Jan 31 05:55:45 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 05:55:45 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 05:55:45 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 05:55:45 localhost kernel: BIOS-provided physical RAM map:
Jan 31 05:55:45 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 05:55:45 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 05:55:45 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 05:55:45 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 05:55:45 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 05:55:45 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 05:55:45 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 05:55:45 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
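
The three ranges marked usable in the e820 map above are the guest's RAM. A minimal Python sketch that sums them to confirm the roughly 8 GiB total; the log file path is an assumption for illustration:

    # Sum the BIOS-e820 ranges marked 'usable'; "boot.log" is an assumed path.
    import re

    usable = 0
    with open("boot.log") as log:
        for line in log:
            m = re.search(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable", line)
            if m:
                start, end = (int(g, 16) for g in m.groups())
                usable += end - start + 1   # end address is inclusive

    print(f"{usable / 2**30:.2f} GiB usable")   # ~8.00 GiB for the map above
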
Jan 31 05:55:45 localhost kernel: NX (Execute Disable) protection: active
Jan 31 05:55:45 localhost kernel: APIC: Static calls initialized
Jan 31 05:55:45 localhost kernel: SMBIOS 2.8 present.
Jan 31 05:55:45 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 05:55:45 localhost kernel: Hypervisor detected: KVM
Jan 31 05:55:45 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 05:55:45 localhost kernel: kvm-clock: using sched offset of 4400785281 cycles
Jan 31 05:55:45 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 05:55:45 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 31 05:55:45 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 31 05:55:45 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 31 05:55:45 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 05:55:45 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 05:55:45 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 05:55:45 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 05:55:45 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 05:55:45 localhost kernel: Using GB pages for direct mapping
Jan 31 05:55:45 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 05:55:45 localhost kernel: ACPI: Early table checksum verification disabled
Jan 31 05:55:45 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 05:55:45 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:55:45 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:55:45 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:55:45 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 05:55:45 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:55:45 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:55:45 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 05:55:45 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 05:55:45 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 05:55:45 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 05:55:45 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 05:55:45 localhost kernel: No NUMA configuration found
Jan 31 05:55:45 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 05:55:45 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 31 05:55:45 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
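
The 256 MB figure follows from the crashkernel= parameter on the command line: each range:size entry applies when total system RAM falls in that range (start inclusive, end exclusive), and with roughly 8 GiB this guest lands in the 2G-64G band. A sketch of that selection logic in Python, as an illustration rather than the kernel's actual code:

    # Illustrative model of crashkernel=range:size selection; not kernel code.
    UNITS = {"M": 2**20, "G": 2**30}

    def size_of(s):
        return int(s[:-1]) * UNITS[s[-1]] if s else None   # "" = open-ended range

    def crashkernel_reservation(spec, ram_bytes):
        for entry in spec.split(","):
            rng, size = entry.split(":")
            lo, hi = (size_of(part) for part in rng.split("-"))
            if ram_bytes >= lo and (hi is None or ram_bytes < hi):
                return size_of(size)
        return 0

    ram = 8 * 2**30   # ~8 GiB, per the e820 map earlier in this log
    mb = crashkernel_reservation("1G-2G:192M,2G-64G:256M,64G-:512M", ram) // 2**20
    print(mb, "MB")   # 256 MB, matching the reservation logged above
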
Jan 31 05:55:45 localhost kernel: Zone ranges:
Jan 31 05:55:45 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 05:55:45 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 05:55:45 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 05:55:45 localhost kernel:   Device   empty
Jan 31 05:55:45 localhost kernel: Movable zone start for each node
Jan 31 05:55:45 localhost kernel: Early memory node ranges
Jan 31 05:55:45 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 05:55:45 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 05:55:45 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 05:55:45 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 05:55:45 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 05:55:45 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 05:55:45 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 05:55:45 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 05:55:45 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 05:55:45 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 05:55:45 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 05:55:45 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 05:55:45 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 05:55:45 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 05:55:45 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 05:55:45 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 05:55:45 localhost kernel: TSC deadline timer available
Jan 31 05:55:45 localhost kernel: CPU topo: Max. logical packages:   8
Jan 31 05:55:45 localhost kernel: CPU topo: Max. logical dies:       8
Jan 31 05:55:45 localhost kernel: CPU topo: Max. dies per package:   1
Jan 31 05:55:45 localhost kernel: CPU topo: Max. threads per core:   1
Jan 31 05:55:45 localhost kernel: CPU topo: Num. cores per package:     1
Jan 31 05:55:45 localhost kernel: CPU topo: Num. threads per package:   1
Jan 31 05:55:45 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 05:55:45 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 05:55:45 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 05:55:45 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 05:55:45 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 05:55:45 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 05:55:45 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 05:55:45 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 05:55:45 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 05:55:45 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 05:55:45 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 05:55:45 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 05:55:45 localhost kernel: Booting paravirtualized kernel on KVM
Jan 31 05:55:45 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 05:55:45 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 05:55:45 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 05:55:45 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 31 05:55:45 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 31 05:55:45 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 05:55:45 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 05:55:45 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
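
This warning is expected: BOOT_IMAGE= is set by GRUB and is consumed in user space, as the init environment later in this log shows. For analysis, the logged command line splits cleanly into bare flags and key=value pairs; a small Python sketch (the dict shape is an illustration, not a kernel structure):

    # Split the command line logged above into flags and key=value pairs.
    cmdline = ("BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 "
               "root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro "
               "console=ttyS0,115200n8 no_timer_check net.ifnames=0 "
               "crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M")

    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value if value else True   # bare flags such as 'ro'

    print(params["console"])   # ttyS0,115200n8
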
Jan 31 05:55:45 localhost kernel: random: crng init done
Jan 31 05:55:45 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 05:55:45 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 05:55:45 localhost kernel: Fallback order for Node 0: 0 
Jan 31 05:55:45 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 05:55:45 localhost kernel: Policy zone: Normal
Jan 31 05:55:45 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 05:55:45 localhost kernel: software IO TLB: area num 8.
Jan 31 05:55:45 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 05:55:45 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 05:55:45 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 05:55:45 localhost kernel: Dynamic Preempt: voluntary
Jan 31 05:55:45 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 05:55:45 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 31 05:55:45 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 05:55:45 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 31 05:55:45 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 31 05:55:45 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 31 05:55:45 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 05:55:45 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 05:55:45 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 05:55:45 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 05:55:45 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 05:55:45 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 05:55:45 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 05:55:45 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 05:55:45 localhost kernel: Console: colour VGA+ 80x25
Jan 31 05:55:45 localhost kernel: printk: console [ttyS0] enabled
Jan 31 05:55:45 localhost kernel: ACPI: Core revision 20230331
Jan 31 05:55:45 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 05:55:45 localhost kernel: x2apic enabled
Jan 31 05:55:45 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 05:55:45 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 05:55:45 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 31 05:55:45 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 05:55:45 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 05:55:45 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 05:55:45 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 05:55:45 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 05:55:45 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 05:55:45 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 05:55:45 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 05:55:45 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 05:55:45 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 05:55:45 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 05:55:45 localhost kernel: active return thunk: retbleed_return_thunk
Jan 31 05:55:45 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 05:55:45 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 05:55:45 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 05:55:45 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 05:55:45 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 05:55:45 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 05:55:45 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 31 05:55:45 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 31 05:55:45 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 05:55:45 localhost kernel: landlock: Up and running.
Jan 31 05:55:45 localhost kernel: Yama: becoming mindful.
Jan 31 05:55:45 localhost kernel: SELinux:  Initializing.
Jan 31 05:55:45 localhost kernel: LSM support for eBPF active
Jan 31 05:55:45 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 05:55:45 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 05:55:45 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 05:55:45 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 05:55:45 localhost kernel: ... version:                0
Jan 31 05:55:45 localhost kernel: ... bit width:              48
Jan 31 05:55:45 localhost kernel: ... generic registers:      6
Jan 31 05:55:45 localhost kernel: ... value mask:             0000ffffffffffff
Jan 31 05:55:45 localhost kernel: ... max period:             00007fffffffffff
Jan 31 05:55:45 localhost kernel: ... fixed-purpose events:   0
Jan 31 05:55:45 localhost kernel: ... event mask:             000000000000003f
Jan 31 05:55:45 localhost kernel: signal: max sigframe size: 1776
Jan 31 05:55:45 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 31 05:55:45 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 31 05:55:45 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 31 05:55:45 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 31 05:55:45 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 05:55:45 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 05:55:45 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
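
The 44800 total is just the per-CPU preset from the earlier calibration line times eight CPUs. A quick check of the arithmetic, with HZ=1000 taken as an assumption (the usual tick rate for RHEL 9 x86 kernels):

    # Reproduce the BogoMIPS figures from lpj=2800000 logged above.
    lpj, HZ, cpus = 2_800_000, 1000, 8   # HZ=1000 is an assumption

    per_cpu = lpj / (500_000 / HZ)       # kernel's BogoMIPS formula
    print(per_cpu, per_cpu * cpus)       # 5600.0 44800.0
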
Jan 31 05:55:45 localhost kernel: node 0 deferred pages initialised in 9ms
Jan 31 05:55:45 localhost kernel: Memory: 7763792K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618408K reserved, 0K cma-reserved)
Jan 31 05:55:45 localhost kernel: devtmpfs: initialized
Jan 31 05:55:45 localhost kernel: x86/mm: Memory block size: 128MB
Jan 31 05:55:45 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 05:55:45 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 05:55:45 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 05:55:45 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 05:55:45 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 05:55:45 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 05:55:45 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 05:55:45 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 31 05:55:45 localhost kernel: audit: type=2000 audit(1769838943.231:1): state=initialized audit_enabled=0 res=1
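
Audit records are stamped epoch-seconds.millis:serial. Decoding the value above gives 2026-01-31 05:55:43 UTC, consistent with the rtc_cmos line later in this log (epoch 1769838944 setting the clock to 05:55:44). A short Python check:

    # Decode the audit(1769838943.231:1) timestamp from the record above.
    from datetime import datetime, timezone

    stamp, serial = "1769838943.231:1".rsplit(":", 1)
    when = datetime.fromtimestamp(float(stamp), tz=timezone.utc)
    print(when.isoformat(), "serial", serial)
    # 2026-01-31T05:55:43.231000+00:00 serial 1
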
Jan 31 05:55:45 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 05:55:45 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 05:55:45 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 05:55:45 localhost kernel: cpuidle: using governor menu
Jan 31 05:55:45 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 05:55:45 localhost kernel: PCI: Using configuration type 1 for base access
Jan 31 05:55:45 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 31 05:55:45 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 05:55:45 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 05:55:45 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 05:55:45 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 05:55:45 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 05:55:45 localhost kernel: Demotion targets for Node 0: null
Jan 31 05:55:45 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 05:55:45 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 31 05:55:45 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 31 05:55:45 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 05:55:45 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 05:55:45 localhost kernel: ACPI: Interpreter enabled
Jan 31 05:55:45 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 05:55:45 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 05:55:45 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 05:55:45 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 05:55:45 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 05:55:45 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 05:55:45 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [3] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [4] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [5] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [6] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [7] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [8] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [9] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [10] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [11] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [12] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [13] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [14] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [15] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [16] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [17] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [18] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [19] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [20] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [21] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [22] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [23] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [24] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [25] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [26] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [27] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [28] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [29] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [30] registered
Jan 31 05:55:45 localhost kernel: acpiphp: Slot [31] registered
Jan 31 05:55:45 localhost kernel: PCI host bridge to bus 0000:00
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 05:55:45 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 05:55:45 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 05:55:45 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 05:55:45 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 05:55:45 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 05:55:45 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 05:55:45 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 05:55:45 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 05:55:45 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 05:55:45 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 05:55:45 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 05:55:45 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 05:55:45 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 05:55:45 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 05:55:45 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 05:55:45 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 05:55:45 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 05:55:45 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 05:55:45 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 05:55:45 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 05:55:45 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 05:55:45 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 05:55:45 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 05:55:45 localhost kernel: iommu: Default domain type: Translated
Jan 31 05:55:45 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 05:55:45 localhost kernel: SCSI subsystem initialized
Jan 31 05:55:45 localhost kernel: ACPI: bus type USB registered
Jan 31 05:55:45 localhost kernel: usbcore: registered new interface driver usbfs
Jan 31 05:55:45 localhost kernel: usbcore: registered new interface driver hub
Jan 31 05:55:45 localhost kernel: usbcore: registered new device driver usb
Jan 31 05:55:45 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 05:55:45 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 05:55:45 localhost kernel: PTP clock support registered
Jan 31 05:55:45 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 31 05:55:45 localhost kernel: NetLabel: Initializing
Jan 31 05:55:45 localhost kernel: NetLabel:  domain hash size = 128
Jan 31 05:55:45 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 05:55:45 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 05:55:45 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 31 05:55:45 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 31 05:55:45 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 31 05:55:45 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 31 05:55:45 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 05:55:45 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 05:55:45 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 05:55:45 localhost kernel: vgaarb: loaded
Jan 31 05:55:45 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 05:55:45 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 05:55:45 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 05:55:45 localhost kernel: pnp: PnP ACPI init
Jan 31 05:55:45 localhost kernel: pnp 00:03: [dma 2]
Jan 31 05:55:45 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 31 05:55:45 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 05:55:45 localhost kernel: NET: Registered PF_INET protocol family
Jan 31 05:55:45 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 05:55:45 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 05:55:45 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 05:55:45 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 05:55:45 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 05:55:45 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 05:55:45 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 05:55:45 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 05:55:45 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 05:55:45 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 05:55:45 localhost kernel: NET: Registered PF_XDP protocol family
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 05:55:45 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 05:55:45 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 05:55:45 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 05:55:45 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 47119 usecs
Jan 31 05:55:45 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 31 05:55:45 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 05:55:45 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 05:55:45 localhost kernel: ACPI: bus type thunderbolt registered
Jan 31 05:55:45 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 31 05:55:45 localhost kernel: Initialise system trusted keyrings
Jan 31 05:55:45 localhost kernel: Key type blacklist registered
Jan 31 05:55:45 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 05:55:45 localhost kernel: zbud: loaded
Jan 31 05:55:45 localhost kernel: integrity: Platform Keyring initialized
Jan 31 05:55:45 localhost kernel: integrity: Machine keyring initialized
Jan 31 05:55:45 localhost kernel: Freeing initrd memory: 88000K
Jan 31 05:55:45 localhost kernel: NET: Registered PF_ALG protocol family
Jan 31 05:55:45 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 31 05:55:45 localhost kernel: Key type asymmetric registered
Jan 31 05:55:45 localhost kernel: Asymmetric key parser 'x509' registered
Jan 31 05:55:45 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 05:55:45 localhost kernel: io scheduler mq-deadline registered
Jan 31 05:55:45 localhost kernel: io scheduler kyber registered
Jan 31 05:55:45 localhost kernel: io scheduler bfq registered
Jan 31 05:55:45 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 05:55:45 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 05:55:45 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 05:55:45 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 31 05:55:45 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 05:55:45 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 05:55:45 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 05:55:45 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 05:55:45 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 05:55:45 localhost kernel: Non-volatile memory driver v1.3
Jan 31 05:55:45 localhost kernel: rdac: device handler registered
Jan 31 05:55:45 localhost kernel: hp_sw: device handler registered
Jan 31 05:55:45 localhost kernel: emc: device handler registered
Jan 31 05:55:45 localhost kernel: alua: device handler registered
Jan 31 05:55:45 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 05:55:45 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 05:55:45 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 05:55:45 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 05:55:45 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 05:55:45 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 05:55:45 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 31 05:55:45 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 05:55:45 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 05:55:45 localhost kernel: hub 1-0:1.0: USB hub found
Jan 31 05:55:45 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 31 05:55:45 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 05:55:45 localhost kernel: usbserial: USB Serial support registered for generic
Jan 31 05:55:45 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 05:55:45 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 05:55:45 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 05:55:45 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 05:55:45 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 05:55:45 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 05:55:45 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T05:55:44 UTC (1769838944)
Jan 31 05:55:45 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 05:55:45 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 05:55:45 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 05:55:45 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 05:55:45 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 05:55:45 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 05:55:45 localhost kernel: usbcore: registered new interface driver usbhid
Jan 31 05:55:45 localhost kernel: usbhid: USB HID core driver
Jan 31 05:55:45 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 31 05:55:45 localhost kernel: Initializing XFRM netlink socket
Jan 31 05:55:45 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 31 05:55:45 localhost kernel: Segment Routing with IPv6
Jan 31 05:55:45 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 31 05:55:45 localhost kernel: mpls_gso: MPLS GSO support
Jan 31 05:55:45 localhost kernel: IPI shorthand broadcast: enabled
Jan 31 05:55:45 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 05:55:45 localhost kernel: AES CTR mode by8 optimization enabled
Jan 31 05:55:45 localhost kernel: sched_clock: Marking stable (960005250, 148078810)->(1219384780, -111300720)
Jan 31 05:55:45 localhost kernel: registered taskstats version 1
Jan 31 05:55:45 localhost kernel: Loading compiled-in X.509 certificates
Jan 31 05:55:45 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 05:55:45 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 05:55:45 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 05:55:45 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 05:55:45 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 05:55:45 localhost kernel: Demotion targets for Node 0: null
Jan 31 05:55:45 localhost kernel: page_owner is disabled
Jan 31 05:55:45 localhost kernel: Key type .fscrypt registered
Jan 31 05:55:45 localhost kernel: Key type fscrypt-provisioning registered
Jan 31 05:55:45 localhost kernel: Key type big_key registered
Jan 31 05:55:45 localhost kernel: Key type encrypted registered
Jan 31 05:55:45 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 05:55:45 localhost kernel: Loading compiled-in module X.509 certificates
Jan 31 05:55:45 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 05:55:45 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 31 05:55:45 localhost kernel: ima: No architecture policies found
Jan 31 05:55:45 localhost kernel: evm: Initialising EVM extended attributes:
Jan 31 05:55:45 localhost kernel: evm: security.selinux
Jan 31 05:55:45 localhost kernel: evm: security.SMACK64 (disabled)
Jan 31 05:55:45 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 05:55:45 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 05:55:45 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 05:55:45 localhost kernel: evm: security.apparmor (disabled)
Jan 31 05:55:45 localhost kernel: evm: security.ima
Jan 31 05:55:45 localhost kernel: evm: security.capability
Jan 31 05:55:45 localhost kernel: evm: HMAC attrs: 0x1
Jan 31 05:55:45 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 05:55:45 localhost kernel: Running certificate verification RSA selftest
Jan 31 05:55:45 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 05:55:45 localhost kernel: Running certificate verification ECDSA selftest
Jan 31 05:55:45 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 05:55:45 localhost kernel: clk: Disabling unused clocks
Jan 31 05:55:45 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 31 05:55:45 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 05:55:45 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 31 05:55:45 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 05:55:45 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 05:55:45 localhost kernel: Run /init as init process
Jan 31 05:55:45 localhost kernel:   with arguments:
Jan 31 05:55:45 localhost kernel:     /init
Jan 31 05:55:45 localhost kernel:   with environment:
Jan 31 05:55:45 localhost kernel:     HOME=/
Jan 31 05:55:45 localhost kernel:     TERM=linux
Jan 31 05:55:45 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Jan 31 05:55:45 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 05:55:45 localhost systemd[1]: Detected virtualization kvm.
Jan 31 05:55:45 localhost systemd[1]: Detected architecture x86-64.
Jan 31 05:55:45 localhost systemd[1]: Running in initrd.
Jan 31 05:55:45 localhost systemd[1]: No hostname configured, using default hostname.
Jan 31 05:55:45 localhost systemd[1]: Hostname set to <localhost>.
Jan 31 05:55:45 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 31 05:55:45 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 05:55:45 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 05:55:45 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 05:55:45 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 31 05:55:45 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 05:55:45 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 05:55:45 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 05:55:45 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 31 05:55:45 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 05:55:45 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 05:55:45 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 31 05:55:45 localhost systemd[1]: Reached target Local File Systems.
Jan 31 05:55:45 localhost systemd[1]: Reached target Path Units.
Jan 31 05:55:45 localhost systemd[1]: Reached target Slice Units.
Jan 31 05:55:45 localhost systemd[1]: Reached target Swaps.
Jan 31 05:55:45 localhost systemd[1]: Reached target Timer Units.
Jan 31 05:55:45 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 05:55:45 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 31 05:55:45 localhost systemd[1]: Listening on Journal Socket.
Jan 31 05:55:45 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 05:55:45 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 05:55:45 localhost systemd[1]: Reached target Socket Units.
Jan 31 05:55:45 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 05:55:45 localhost systemd[1]: Starting Journal Service...
Jan 31 05:55:45 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 05:55:45 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 05:55:45 localhost systemd[1]: Starting Create System Users...
Jan 31 05:55:45 localhost systemd[1]: Starting Setup Virtual Console...
Jan 31 05:55:45 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 05:55:45 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 05:55:45 localhost systemd[1]: Finished Create System Users.
Jan 31 05:55:45 localhost systemd-journald[306]: Journal started
Jan 31 05:55:45 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/04875c5209e74b899409fc64b5773d83) is 8.0M, max 153.6M, 145.6M free.
Jan 31 05:55:45 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 31 05:55:45 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 31 05:55:45 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 05:55:45 localhost systemd[1]: Started Journal Service.
Jan 31 05:55:45 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 05:55:45 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 05:55:45 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 05:55:45 localhost systemd[1]: Finished Setup Virtual Console.
Jan 31 05:55:45 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 05:55:45 localhost systemd[1]: Starting dracut cmdline hook...
Jan 31 05:55:45 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 05:55:45 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 05:55:45 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 05:55:45 localhost systemd[1]: Finished dracut cmdline hook.
Jan 31 05:55:45 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 31 05:55:45 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 05:55:45 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 31 05:55:45 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 05:55:45 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 31 05:55:45 localhost kernel: RPC: Registered udp transport module.
Jan 31 05:55:45 localhost kernel: RPC: Registered tcp transport module.
Jan 31 05:55:45 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 05:55:45 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 05:55:45 localhost rpc.statd[443]: Version 2.5.4 starting
Jan 31 05:55:45 localhost rpc.statd[443]: Initializing NSM state
Jan 31 05:55:45 localhost rpc.idmapd[448]: Setting log level to 0
Jan 31 05:55:45 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 31 05:55:45 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 05:55:45 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 05:55:45 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 05:55:45 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 31 05:55:45 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 31 05:55:45 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 05:55:45 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 31 05:55:45 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 05:55:45 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 05:55:45 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 05:55:45 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 05:55:45 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 05:55:45 localhost systemd[1]: Reached target Network.
Jan 31 05:55:45 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 05:55:45 localhost systemd[1]: Starting dracut initqueue hook...
Jan 31 05:55:45 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 05:55:45 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 05:55:45 localhost systemd-udevd[492]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 05:55:45 localhost kernel:  vda: vda1
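
The two capacities reported for vda are the same byte count in decimal (GB) and binary (GiB) units:

    # 167772160 blocks x 512 bytes, as logged for vda above.
    size = 167772160 * 512
    print(f"{size / 10**9:.1f} GB, {size / 2**30:.1f} GiB")   # 85.9 GB, 80.0 GiB
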
Jan 31 05:55:45 localhost kernel: libata version 3.00 loaded.
Jan 31 05:55:45 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 31 05:55:45 localhost kernel: scsi host0: ata_piix
Jan 31 05:55:45 localhost kernel: scsi host1: ata_piix
Jan 31 05:55:45 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 05:55:45 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 05:55:45 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 05:55:45 localhost systemd[1]: Reached target Initrd Root Device.
Jan 31 05:55:45 localhost kernel: ata1: found unknown device (class 0)
Jan 31 05:55:46 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 05:55:46 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 05:55:46 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 05:55:46 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 05:55:46 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 05:55:46 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 31 05:55:46 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 31 05:55:46 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 31 05:55:46 localhost systemd[1]: Reached target System Initialization.
Jan 31 05:55:46 localhost systemd[1]: Reached target Basic System.
Jan 31 05:55:46 localhost systemd[1]: Finished dracut initqueue hook.
Jan 31 05:55:46 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 05:55:46 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 05:55:46 localhost systemd[1]: Reached target Remote File Systems.
Jan 31 05:55:46 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 31 05:55:46 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 31 05:55:46 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 05:55:46 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 05:55:46 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 05:55:46 localhost systemd[1]: Mounting /sysroot...
Jan 31 05:55:46 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 05:55:46 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 05:55:46 localhost kernel: XFS (vda1): Ending clean mount
Jan 31 05:55:46 localhost systemd[1]: Mounted /sysroot.
Jan 31 05:55:46 localhost systemd[1]: Reached target Initrd Root File System.
Jan 31 05:55:46 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 05:55:46 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 05:55:46 localhost systemd[1]: Reached target Initrd File Systems.
Jan 31 05:55:46 localhost systemd[1]: Reached target Initrd Default Target.
Jan 31 05:55:46 localhost systemd[1]: Starting dracut mount hook...
Jan 31 05:55:46 localhost systemd[1]: Finished dracut mount hook.
Jan 31 05:55:46 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 05:55:46 localhost rpc.idmapd[448]: exiting on signal 15
Jan 31 05:55:46 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 05:55:46 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 05:55:46 localhost systemd[1]: Stopped target Network.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Timer Units.
Jan 31 05:55:46 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 05:55:46 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Basic System.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Path Units.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Remote File Systems.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Slice Units.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Socket Units.
Jan 31 05:55:46 localhost systemd[1]: Stopped target System Initialization.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Local File Systems.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Swaps.
Jan 31 05:55:46 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped dracut mount hook.
Jan 31 05:55:46 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 31 05:55:46 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 05:55:46 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 05:55:46 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 31 05:55:46 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 31 05:55:46 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 05:55:46 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 05:55:46 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 05:55:46 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 05:55:46 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 31 05:55:46 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 05:55:46 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 05:55:46 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Closed udev Control Socket.
Jan 31 05:55:46 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Closed udev Kernel Socket.
Jan 31 05:55:46 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 31 05:55:46 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 31 05:55:46 localhost systemd[1]: Starting Cleanup udev Database...
Jan 31 05:55:46 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 05:55:46 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 05:55:46 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Stopped Create System Users.
Jan 31 05:55:46 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 05:55:46 localhost systemd[1]: Finished Cleanup udev Database.
Jan 31 05:55:46 localhost systemd[1]: Reached target Switch Root.
Jan 31 05:55:46 localhost systemd[1]: Starting Switch Root...
Jan 31 05:55:47 localhost systemd[1]: Switching root.
Jan 31 05:55:47 localhost systemd-journald[306]: Journal stopped
Jan 31 05:55:48 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Jan 31 05:55:48 localhost kernel: audit: type=1404 audit(1769838947.216:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 05:55:48 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:55:48 localhost kernel: SELinux:  policy capability open_perms=1
Jan 31 05:55:48 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:55:48 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:55:48 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:55:48 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:55:48 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:55:48 localhost kernel: audit: type=1403 audit(1769838947.327:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 05:55:48 localhost systemd[1]: Successfully loaded SELinux policy in 115.587ms.
Jan 31 05:55:48 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.279ms.
Jan 31 05:55:48 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 05:55:48 localhost systemd[1]: Detected virtualization kvm.
Jan 31 05:55:48 localhost systemd[1]: Detected architecture x86-64.
Jan 31 05:55:48 localhost systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:55:48 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 31 05:55:48 localhost systemd[1]: Stopped Switch Root.
Jan 31 05:55:48 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 05:55:48 localhost systemd[1]: Created slice Slice /system/getty.
Jan 31 05:55:48 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 31 05:55:48 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 31 05:55:48 localhost systemd[1]: Created slice User and Session Slice.
Jan 31 05:55:48 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 05:55:48 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 31 05:55:48 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 05:55:48 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 05:55:48 localhost systemd[1]: Stopped target Switch Root.
Jan 31 05:55:48 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 31 05:55:48 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 31 05:55:48 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 31 05:55:48 localhost systemd[1]: Reached target Path Units.
Jan 31 05:55:48 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 31 05:55:48 localhost systemd[1]: Reached target Slice Units.
Jan 31 05:55:48 localhost systemd[1]: Reached target Swaps.
Jan 31 05:55:48 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 31 05:55:48 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 31 05:55:48 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 31 05:55:48 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 31 05:55:48 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 31 05:55:48 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 05:55:48 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 05:55:48 localhost systemd[1]: Mounting Huge Pages File System...
Jan 31 05:55:48 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 31 05:55:48 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 31 05:55:48 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 31 05:55:48 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 05:55:48 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 05:55:48 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 05:55:48 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 31 05:55:48 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 31 05:55:48 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 31 05:55:48 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 05:55:48 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 31 05:55:48 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 31 05:55:48 localhost systemd[1]: Stopped Journal Service.
Jan 31 05:55:48 localhost systemd[1]: Starting Journal Service...
Jan 31 05:55:48 localhost kernel: fuse: init (API version 7.37)
Jan 31 05:55:48 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 05:55:48 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 31 05:55:48 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 05:55:48 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 31 05:55:48 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 05:55:48 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 05:55:48 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 05:55:48 localhost systemd-journald[679]: Journal started
Jan 31 05:55:48 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 05:55:47 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 31 05:55:47 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 05:55:48 localhost systemd[1]: Started Journal Service.
Jan 31 05:55:48 localhost systemd[1]: Mounted Huge Pages File System.
Jan 31 05:55:48 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 31 05:55:48 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 31 05:55:48 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 31 05:55:48 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 05:55:48 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 05:55:48 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 05:55:48 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 05:55:48 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 05:55:48 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 05:55:48 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 31 05:55:48 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 05:55:48 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 05:55:48 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:48 localhost kernel: ACPI: bus type drm_connector registered
Jan 31 05:55:48 localhost systemd[1]: Mounting FUSE Control File System...
Jan 31 05:55:48 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 05:55:48 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 31 05:55:48 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 05:55:48 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 05:55:48 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 05:55:48 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 31 05:55:48 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 05:55:48 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 05:55:48 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 05:55:48 localhost systemd[1]: Starting Create System Users...
Jan 31 05:55:48 localhost systemd[1]: Mounted FUSE Control File System.
Jan 31 05:55:48 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 05:55:48 localhost systemd-journald[679]: Received client request to flush runtime journal.
Jan 31 05:55:48 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 05:55:48 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 05:55:48 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 05:55:48 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 05:55:48 localhost systemd[1]: Finished Create System Users.
Jan 31 05:55:48 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 05:55:48 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 05:55:48 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 05:55:48 localhost systemd[1]: Reached target Local File Systems.
Jan 31 05:55:48 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 05:55:48 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 05:55:48 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 05:55:48 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 05:55:48 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 05:55:48 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 05:55:48 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 05:55:48 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Jan 31 05:55:48 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 05:55:48 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 05:55:48 localhost systemd[1]: Starting Security Auditing Service...
Jan 31 05:55:48 localhost systemd[1]: Starting RPC Bind...
Jan 31 05:55:48 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 05:55:48 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 05:55:48 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 05:55:48 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 05:55:48 localhost systemd[1]: Started RPC Bind.
Jan 31 05:55:48 localhost augenrules[706]: /sbin/augenrules: No change
Jan 31 05:55:48 localhost augenrules[721]: No rules
Jan 31 05:55:48 localhost augenrules[721]: enabled 1
Jan 31 05:55:48 localhost augenrules[721]: failure 1
Jan 31 05:55:48 localhost augenrules[721]: pid 701
Jan 31 05:55:48 localhost augenrules[721]: rate_limit 0
Jan 31 05:55:48 localhost augenrules[721]: backlog_limit 8192
Jan 31 05:55:48 localhost augenrules[721]: lost 0
Jan 31 05:55:48 localhost augenrules[721]: backlog 4
Jan 31 05:55:48 localhost augenrules[721]: backlog_wait_time 60000
Jan 31 05:55:48 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 31 05:55:48 localhost augenrules[721]: enabled 1
Jan 31 05:55:48 localhost augenrules[721]: failure 1
Jan 31 05:55:48 localhost augenrules[721]: pid 701
Jan 31 05:55:48 localhost augenrules[721]: rate_limit 0
Jan 31 05:55:48 localhost augenrules[721]: backlog_limit 8192
Jan 31 05:55:48 localhost augenrules[721]: lost 0
Jan 31 05:55:48 localhost augenrules[721]: backlog 4
Jan 31 05:55:48 localhost augenrules[721]: backlog_wait_time 60000
Jan 31 05:55:48 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 31 05:55:48 localhost augenrules[721]: enabled 1
Jan 31 05:55:48 localhost augenrules[721]: failure 1
Jan 31 05:55:48 localhost augenrules[721]: pid 701
Jan 31 05:55:48 localhost augenrules[721]: rate_limit 0
Jan 31 05:55:48 localhost augenrules[721]: backlog_limit 8192
Jan 31 05:55:48 localhost augenrules[721]: lost 0
Jan 31 05:55:48 localhost augenrules[721]: backlog 3
Jan 31 05:55:48 localhost augenrules[721]: backlog_wait_time 60000
Jan 31 05:55:48 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 31 05:55:48 localhost systemd[1]: Started Security Auditing Service.
Jan 31 05:55:48 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 05:55:48 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 05:55:48 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 31 05:55:48 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 05:55:48 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 05:55:48 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 05:55:48 localhost systemd[1]: Starting Update is Completed...
Jan 31 05:55:48 localhost systemd[1]: Finished Update is Completed.
Jan 31 05:55:48 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 05:55:48 localhost systemd[1]: Reached target System Initialization.
Jan 31 05:55:48 localhost systemd[1]: Started dnf makecache --timer.
Jan 31 05:55:48 localhost systemd[1]: Started Daily rotation of log files.
Jan 31 05:55:48 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 05:55:48 localhost systemd[1]: Reached target Timer Units.
Jan 31 05:55:48 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 05:55:48 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 05:55:48 localhost systemd[1]: Reached target Socket Units.
Jan 31 05:55:48 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 31 05:55:48 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 05:55:49 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 05:55:49 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 05:55:49 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 05:55:49 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 05:55:49 localhost systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 05:55:49 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 31 05:55:49 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 05:55:49 localhost systemd[1]: Reached target Basic System.
Jan 31 05:55:49 localhost dbus-broker-lau[765]: Ready
Jan 31 05:55:49 localhost systemd[1]: Starting NTP client/server...
Jan 31 05:55:49 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 05:55:49 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 05:55:49 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 05:55:49 localhost systemd[1]: Started irqbalance daemon.
Jan 31 05:55:49 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 05:55:49 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 05:55:49 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 05:55:49 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 05:55:49 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 31 05:55:49 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 05:55:49 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 31 05:55:49 localhost systemd[1]: Starting User Login Management...
Jan 31 05:55:49 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 05:55:49 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 05:55:49 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 05:55:49 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 05:55:49 localhost chronyd[796]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 05:55:49 localhost chronyd[796]: Loaded 0 symmetric keys
Jan 31 05:55:49 localhost chronyd[796]: Using right/UTC timezone to obtain leap second data
Jan 31 05:55:49 localhost chronyd[796]: Loaded seccomp filter (level 2)
Jan 31 05:55:49 localhost systemd[1]: Started NTP client/server.
Jan 31 05:55:49 localhost systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 05:55:49 localhost systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 05:55:49 localhost systemd-logind[788]: New seat seat0.
Jan 31 05:55:49 localhost systemd[1]: Started User Login Management.
Jan 31 05:55:49 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 05:55:49 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 05:55:49 localhost kernel: Console: switching to colour dummy device 80x25
Jan 31 05:55:49 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 05:55:49 localhost kernel: [drm] features: -context_init
Jan 31 05:55:49 localhost kernel: [drm] number of scanouts: 1
Jan 31 05:55:49 localhost kernel: [drm] number of cap sets: 0
Jan 31 05:55:49 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 05:55:49 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 05:55:49 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 31 05:55:49 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 05:55:49 localhost kernel: kvm_amd: TSC scaling supported
Jan 31 05:55:49 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 31 05:55:49 localhost kernel: kvm_amd: Nested Paging enabled
Jan 31 05:55:49 localhost kernel: kvm_amd: LBR virtualization supported
Jan 31 05:55:49 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 05:55:49 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 05:55:49 localhost iptables.init[783]: iptables: Applying firewall rules: [  OK  ]
Jan 31 05:55:49 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 05:55:49 localhost cloud-init[839]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 05:55:49 +0000. Up 6.26 seconds.
Jan 31 05:55:50 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 31 05:55:50 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 31 05:55:50 localhost systemd[1]: run-cloud\x2dinit-tmp-tmppt5r4tf2.mount: Deactivated successfully.
Jan 31 05:55:50 localhost systemd[1]: Starting Hostname Service...
Jan 31 05:55:50 localhost systemd[1]: Started Hostname Service.
Jan 31 05:55:50 np0005603542.novalocal systemd-hostnamed[853]: Hostname set to <np0005603542.novalocal> (static)
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Reached target Preparation for Network.
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Starting Network Manager...
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4139] NetworkManager (version 1.54.3-2.el9) is starting... (boot:0f1e2a66-84e3-44e9-8fdb-17905db9d508)
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4144] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4278] manager[0x55d92f7f7000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4327] hostname: hostname: using hostnamed
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4328] hostname: static hostname changed from (none) to "np0005603542.novalocal"
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4332] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4449] manager[0x55d92f7f7000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4450] manager[0x55d92f7f7000]: rfkill: WWAN hardware radio set enabled
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4533] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4533] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4533] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4534] manager: Networking is enabled by state file
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4535] settings: Loaded settings plugin: keyfile (internal)
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4573] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4597] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4612] dhcp: init: Using DHCP client 'internal'
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4617] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4629] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4640] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4652] device (lo): Activation: starting connection 'lo' (12701cd6-486a-4822-b29d-5b142e7d4428)
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4659] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4661] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Started Network Manager.
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4728] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4732] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4735] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4740] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4743] device (eth0): carrier: link connected
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4746] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Reached target Network.
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4753] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4761] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4769] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4771] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4774] manager: NetworkManager state is now CONNECTING
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4776] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4782] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4786] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4840] dhcp4 (eth0): state changed new lease, address=38.102.83.128
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4847] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4868] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4974] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4978] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.4985] device (lo): Activation: successful, device activated.
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.5011] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.5013] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.5016] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.5021] device (eth0): Activation: successful, device activated.
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.5026] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 05:55:50 np0005603542.novalocal NetworkManager[857]: <info>  [1769838950.5034] manager: startup complete
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Reached target NFS client services.
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Reached target Remote File Systems.
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 05:55:50 np0005603542.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 05:55:50 +0000. Up 7.20 seconds.
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.128         | 255.255.255.0 | global | fa:16:3e:f5:18:0d |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fef5:180d/64 |       .       |  link  | fa:16:3e:f5:18:0d |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 05:55:50 np0005603542.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 05:55:51 np0005603542.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Jan 31 05:55:51 np0005603542.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 31 05:55:51 np0005603542.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Jan 31 05:55:51 np0005603542.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Jan 31 05:55:51 np0005603542.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Jan 31 05:55:51 np0005603542.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: Generating public/private rsa key pair.
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: The key fingerprint is:
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: SHA256:FrZEJ3U9bC8D9v9367Og8R2iOZRn+cas/XSDU25TR2o root@np0005603542.novalocal
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: The key's randomart image is:
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: +---[RSA 3072]----+
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |        o.o .o   |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |       . o .o =  |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |        +  . + o.|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |       o o    +o.|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |        S   . E=o|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |       .   o =+ +|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |          ..o=+*+|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |           .* BBO|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |           +.o+*O|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: The key fingerprint is:
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: SHA256:jRkv6NElkBg+xCAHEQlk5SJFuz+RWUOBxG6O9x7VHo0 root@np0005603542.novalocal
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: The key's randomart image is:
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: +---[ECDSA 256]---+
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |OO**+++o         |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |o+.+=...         |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |...oo o o .      |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |. ..o= + X o     |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |  .++ o S E .    |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |  ..oo o o .     |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |   .o.o   .      |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |     ...         |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |     ..          |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: The key fingerprint is:
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: SHA256:cLJx9oE7miPteVC1xlN3Wl7emg0o/bjwJaJZsJjhoOQ root@np0005603542.novalocal
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: The key's randomart image is:
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: +--[ED25519 256]--+
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |                 |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |         .. . . +|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |      + =o.+ o *o|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |  . . .Ooo*.o o +|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: | o . o.=S+.o o = |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |  E  .=o..+ o = .|
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |    . =. + + +   |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |     o o+   o    |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: |      o.         |
Jan 31 05:55:51 np0005603542.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 31 05:55:52 np0005603542.novalocal sm-notify[1003]: Version 2.5.4 starting
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 05:55:52 np0005603542.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 31 05:55:52 np0005603542.novalocal sshd[1005]: Server listening on :: port 22.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Reached target Network is Online.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Starting System Logging Service...
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Starting Permit User Sessions...
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Finished Permit User Sessions.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Started Command Scheduler.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Started Getty on tty1.
Jan 31 05:55:52 np0005603542.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Jan 31 05:55:52 np0005603542.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 31 05:55:52 np0005603542.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 84% if used.)
Jan 31 05:55:52 np0005603542.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Reached target Login Prompts.
Jan 31 05:55:52 np0005603542.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Started System Logging Service.
Jan 31 05:55:52 np0005603542.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Reached target Multi-User System.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 05:55:52 np0005603542.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:55:52 np0005603542.novalocal kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Jan 31 05:55:52 np0005603542.novalocal kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1128]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 05:55:52 +0000. Up 8.76 seconds.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 05:55:52 np0005603542.novalocal dracut[1266]: dracut-057-102.git20250818.el9
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1284]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 05:55:52 +0000. Up 9.09 seconds.
Jan 31 05:55:52 np0005603542.novalocal dracut[1268]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1308]: #############################################################
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1310]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1317]: 256 SHA256:jRkv6NElkBg+xCAHEQlk5SJFuz+RWUOBxG6O9x7VHo0 root@np0005603542.novalocal (ECDSA)
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1323]: 256 SHA256:cLJx9oE7miPteVC1xlN3Wl7emg0o/bjwJaJZsJjhoOQ root@np0005603542.novalocal (ED25519)
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1330]: 3072 SHA256:FrZEJ3U9bC8D9v9367Og8R2iOZRn+cas/XSDU25TR2o root@np0005603542.novalocal (RSA)
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1332]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1338]: #############################################################
Jan 31 05:55:52 np0005603542.novalocal cloud-init[1284]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 05:55:52 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.27 seconds
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 05:55:52 np0005603542.novalocal systemd[1]: Reached target Cloud-init target.
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 05:55:53 np0005603542.novalocal sshd-session[1568]: Connection reset by 38.102.83.114 port 47246 [preauth]
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 05:55:53 np0005603542.novalocal sshd-session[1591]: Unable to negotiate with 38.102.83.114 port 47256: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 31 05:55:53 np0005603542.novalocal sshd-session[1607]: Connection reset by 38.102.83.114 port 47270 [preauth]
Jan 31 05:55:53 np0005603542.novalocal sshd-session[1619]: Unable to negotiate with 38.102.83.114 port 47276: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 31 05:55:53 np0005603542.novalocal sshd-session[1632]: Unable to negotiate with 38.102.83.114 port 47290: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 05:55:53 np0005603542.novalocal sshd-session[1638]: Connection reset by 38.102.83.114 port 47306 [preauth]
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 31 05:55:53 np0005603542.novalocal sshd-session[1664]: Unable to negotiate with 38.102.83.114 port 47328: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 31 05:55:53 np0005603542.novalocal sshd-session[1670]: Unable to negotiate with 38.102.83.114 port 47338: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 31 05:55:53 np0005603542.novalocal sshd-session[1652]: Connection closed by 38.102.83.114 port 47316 [preauth]
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: memstrack is not available
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 05:55:53 np0005603542.novalocal dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: memstrack is not available
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: *** Including module: systemd ***
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: *** Including module: fips ***
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: *** Including module: systemd-initrd ***
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: *** Including module: i18n ***
Jan 31 05:55:54 np0005603542.novalocal dracut[1268]: *** Including module: drm ***
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]: *** Including module: prefixdevname ***
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]: *** Including module: kernel-modules ***
Jan 31 05:55:55 np0005603542.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 31 05:55:55 np0005603542.novalocal chronyd[796]: Selected source 142.4.192.253 (2.centos.pool.ntp.org)
Jan 31 05:55:55 np0005603542.novalocal chronyd[796]: System clock TAI offset set to 37 seconds
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]: *** Including module: kernel-modules-extra ***
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]: *** Including module: qemu ***
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]: *** Including module: fstab-sys ***
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]: *** Including module: rootfs-block ***
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]: *** Including module: terminfo ***
Jan 31 05:55:55 np0005603542.novalocal dracut[1268]: *** Including module: udev-rules ***
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]: Skipping udev rule: 91-permissions.rules
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]: *** Including module: virtiofs ***
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]: *** Including module: dracut-systemd ***
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]: *** Including module: usrmount ***
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]: *** Including module: base ***
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]: *** Including module: fs-lib ***
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]: *** Including module: kdumpbase ***
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]:   microcode_ctl module: mangling fw_dir
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 05:55:56 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]: *** Including module: openssl ***
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]: *** Including module: shutdown ***
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]: *** Including module: squash ***
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]: *** Including modules done ***
Jan 31 05:55:57 np0005603542.novalocal dracut[1268]: *** Installing kernel module dependencies ***
Jan 31 05:55:58 np0005603542.novalocal dracut[1268]: *** Installing kernel module dependencies done ***
Jan 31 05:55:58 np0005603542.novalocal dracut[1268]: *** Resolving executable dependencies ***
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: IRQ 35 affinity is now unmanaged
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: IRQ 33 affinity is now unmanaged
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: IRQ 31 affinity is now unmanaged
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: IRQ 28 affinity is now unmanaged
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: IRQ 34 affinity is now unmanaged
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: IRQ 32 affinity is now unmanaged
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: IRQ 30 affinity is now unmanaged
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 05:55:59 np0005603542.novalocal irqbalance[784]: IRQ 29 affinity is now unmanaged
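
[note] The irqbalance errors above are common in a KVM guest: writes to /proc/irq/N/smp_affinity for these virtio interrupts are rejected with EPERM (typically because the interrupts are kernel-managed), so irqbalance stops managing them. A read-only Python sketch that dumps the masks irqbalance tried to rewrite:

    # Read-only dump of per-IRQ affinity masks. Writing smp_affinity
    # needs root and, for kernel-managed interrupts, fails with EPERM
    # exactly as logged above; this sketch never writes.
    from pathlib import Path

    for irq_dir in sorted(Path("/proc/irq").iterdir(), key=lambda p: p.name):
        if not irq_dir.name.isdigit():
            continue  # skip e.g. /proc/irq/default_smp_affinity
        try:
            mask = (irq_dir / "smp_affinity").read_text().strip()
        except OSError:
            mask = "<unreadable>"
        print(f"IRQ {irq_dir.name}: smp_affinity={mask}")
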
Jan 31 05:55:59 np0005603542.novalocal dracut[1268]: *** Resolving executable dependencies done ***
Jan 31 05:55:59 np0005603542.novalocal dracut[1268]: *** Generating early-microcode cpio image ***
Jan 31 05:55:59 np0005603542.novalocal dracut[1268]: *** Store current command line parameters ***
Jan 31 05:55:59 np0005603542.novalocal dracut[1268]: Stored kernel commandline:
Jan 31 05:55:59 np0005603542.novalocal dracut[1268]: No dracut internal kernel commandline stored in the initramfs
Jan 31 05:55:59 np0005603542.novalocal dracut[1268]: *** Install squash loader ***
Jan 31 05:56:00 np0005603542.novalocal dracut[1268]: *** Squashing the files inside the initramfs ***
Jan 31 05:56:00 np0005603542.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: *** Squashing the files inside the initramfs done ***
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: *** Hardlinking files ***
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: Mode:           real
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: Files:          50
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: Linked:         0 files
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: Compared:       0 xattrs
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: Compared:       0 files
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: Saved:          0 B
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: Duration:       0.000617 seconds
Jan 31 05:56:01 np0005603542.novalocal dracut[1268]: *** Hardlinking files done ***
Jan 31 05:56:02 np0005603542.novalocal dracut[1268]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 05:56:02 np0005603542.novalocal kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Jan 31 05:56:02 np0005603542.novalocal kdumpctl[1012]: kdump: Starting kdump: [OK]
Jan 31 05:56:02 np0005603542.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 31 05:56:02 np0005603542.novalocal systemd[1]: Startup finished in 1.289s (kernel) + 2.335s (initrd) + 15.440s (userspace) = 19.066s.
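
[note] The kdumpctl lines above confirm that kexec loaded the capture kernel built from the initramfs created earlier. A minimal read-only check of the same state; /sys/kernel/kexec_crash_loaded is the kernel's own flag, comparable to what `kdumpctl status` reports:

    # The flag flips to 1 once kexec has loaded the crash kernel,
    # matching the "Starting kdump: [OK]" line above.
    from pathlib import Path

    flag = Path("/sys/kernel/kexec_crash_loaded")
    loaded = flag.is_file() and flag.read_text().strip() == "1"
    print("kdump kernel loaded" if loaded else "kdump kernel not loaded")
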
Jan 31 05:56:20 np0005603542.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 05:58:06 np0005603542.novalocal sshd-session[4304]: Accepted publickey for zuul from 38.102.83.114 port 45600 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 31 05:58:06 np0005603542.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 31 05:58:06 np0005603542.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 05:58:06 np0005603542.novalocal systemd-logind[788]: New session 1 of user zuul.
Jan 31 05:58:06 np0005603542.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 05:58:06 np0005603542.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Queued start job for default target Main User Target.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Created slice User Application Slice.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Reached target Paths.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Reached target Timers.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Starting D-Bus User Message Bus Socket...
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Starting Create User's Volatile Files and Directories...
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Listening on D-Bus User Message Bus Socket.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Reached target Sockets.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Finished Create User's Volatile Files and Directories.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Reached target Basic System.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Reached target Main User Target.
Jan 31 05:58:06 np0005603542.novalocal systemd[4308]: Startup finished in 112ms.
Jan 31 05:58:06 np0005603542.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 31 05:58:06 np0005603542.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 31 05:58:06 np0005603542.novalocal sshd-session[4304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:58:07 np0005603542.novalocal python3[4390]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:58:18 np0005603542.novalocal python3[4418]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:58:26 np0005603542.novalocal python3[4476]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:58:27 np0005603542.novalocal python3[4516]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 05:58:29 np0005603542.novalocal python3[4542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcT1sSKuKtP7Uq0MjNeQVuuPZJ5GQUMajyU1EMc7oKP7OPKur09xKegjfcQuJ1YWEngwomzcSh03o58EEcgHL7twSbbSV/tl19q0h1wtkobuk8zVRYN7tQa/7+Dp5jl1JUbLX1piCb+tuLUgQDdiulTbxlRD4ovZ8WuAKA8vVM7sVyANXJcBRjLRxQdcjys7R20df/sj4ryBJdnPmzVbP4EqMdexQEtCt/8FlC0Ih5W8Z5u3Z9XeqrzpR7MPmKSx2txi89bf82EtuA0X6ZdTxuY6yJSodI2XrTK6TPFQozJ+Qb2JQjFHOiFnKkkIkK/CeG0AQfXMUP/5RHLcPOwZzfDXmDzokfChY+tN1a5ypSxAK/QireQfgbN5UOS4Dj6dH8pdH392T4G8cpNm5P/bExl4G3EOnEbScCZ0h9faJPLV75PCEpymPzxDh7ufyymt/r+VWPlCDQkO3SUOzmgy4p/jCsJcOoEIoUrl7gneWKh/R9DdZ0jOS9uKURThmglcs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:29 np0005603542.novalocal python3[4566]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:30 np0005603542.novalocal python3[4665]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:58:30 np0005603542.novalocal python3[4736]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839110.1182528-252-267328497296846/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=9705b2ecf4824110a053f3cefa64f45f_id_rsa follow=False checksum=c62d5adb5a9253804fdd8540f659bb7cecfeeed4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:31 np0005603542.novalocal python3[4859]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:58:31 np0005603542.novalocal python3[4930]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839111.0961175-307-163623493962848/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=9705b2ecf4824110a053f3cefa64f45f_id_rsa.pub follow=False checksum=edbba6552c915a7dc3463e232002c18fc71ee9d0 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:33 np0005603542.novalocal python3[4978]: ansible-ping Invoked with data=pong
Jan 31 05:58:34 np0005603542.novalocal python3[5002]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:58:36 np0005603542.novalocal python3[5060]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 05:58:38 np0005603542.novalocal python3[5092]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:38 np0005603542.novalocal python3[5116]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:38 np0005603542.novalocal python3[5140]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:39 np0005603542.novalocal python3[5164]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:39 np0005603542.novalocal python3[5188]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:39 np0005603542.novalocal python3[5212]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:41 np0005603542.novalocal sudo[5236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovtqswvczcpueoejqlfkchbtcaoqhvks ; /usr/bin/python3'
Jan 31 05:58:41 np0005603542.novalocal sudo[5236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:41 np0005603542.novalocal python3[5238]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:41 np0005603542.novalocal sudo[5236]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:41 np0005603542.novalocal sudo[5314]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zitqheqhxmyzjxhxeryojzqdzagwxtfg ; /usr/bin/python3'
Jan 31 05:58:41 np0005603542.novalocal sudo[5314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:41 np0005603542.novalocal python3[5316]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:58:41 np0005603542.novalocal sudo[5314]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:42 np0005603542.novalocal sudo[5387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxidrlbtmcaoefcvnammvrgnvscmpbca ; /usr/bin/python3'
Jan 31 05:58:42 np0005603542.novalocal sudo[5387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:42 np0005603542.novalocal python3[5389]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839121.4587574-33-163747761768322/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:42 np0005603542.novalocal sudo[5387]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:43 np0005603542.novalocal python3[5437]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:43 np0005603542.novalocal python3[5461]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:43 np0005603542.novalocal python3[5485]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:43 np0005603542.novalocal python3[5509]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:44 np0005603542.novalocal python3[5533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:44 np0005603542.novalocal python3[5557]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:44 np0005603542.novalocal python3[5581]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:44 np0005603542.novalocal python3[5605]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:45 np0005603542.novalocal python3[5629]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:45 np0005603542.novalocal python3[5653]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:45 np0005603542.novalocal python3[5677]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:45 np0005603542.novalocal python3[5701]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:46 np0005603542.novalocal python3[5725]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:46 np0005603542.novalocal python3[5749]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:46 np0005603542.novalocal python3[5773]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:46 np0005603542.novalocal python3[5797]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:47 np0005603542.novalocal python3[5821]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:47 np0005603542.novalocal python3[5845]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:47 np0005603542.novalocal python3[5869]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:47 np0005603542.novalocal python3[5893]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:48 np0005603542.novalocal python3[5917]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:48 np0005603542.novalocal python3[5941]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:48 np0005603542.novalocal python3[5965]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:48 np0005603542.novalocal python3[5989]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:49 np0005603542.novalocal python3[6013]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:58:49 np0005603542.novalocal python3[6037]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
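
[note] Each ansible-authorized_key invocation above idempotently ensures one public key is present in /home/zuul/.ssh/authorized_keys (state=present, manage_dir=True). A rough Python sketch of that idempotent-append behavior; this is an illustration, not the module's actual implementation, and ensure_authorized_key is a hypothetical helper name:

    from pathlib import Path

    def ensure_authorized_key(home: str, key_line: str) -> bool:
        """Ensure key_line appears in authorized_keys once; True if modified."""
        ssh_dir = Path(home) / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)   # manage_dir=True analogue
        auth_file = ssh_dir / "authorized_keys"
        lines = auth_file.read_text().splitlines() if auth_file.exists() else []
        if key_line.strip() in lines:
            return False                            # already present: no change
        lines.append(key_line.strip())
        auth_file.write_text("\n".join(lines) + "\n")
        auth_file.chmod(0o600)
        return True

    # e.g. ensure_authorized_key("/home/zuul", "ssh-ed25519 AAAA... user@host")
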
Jan 31 05:58:52 np0005603542.novalocal sudo[6061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pohleqfejsciozgacumkwcdpufhhewss ; /usr/bin/python3'
Jan 31 05:58:52 np0005603542.novalocal sudo[6061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:52 np0005603542.novalocal python3[6063]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 05:58:52 np0005603542.novalocal systemd[1]: Starting Time & Date Service...
Jan 31 05:58:52 np0005603542.novalocal systemd[1]: Started Time & Date Service.
Jan 31 05:58:52 np0005603542.novalocal systemd-timedated[6065]: Changed time zone to 'UTC' (UTC).
Jan 31 05:58:52 np0005603542.novalocal sudo[6061]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:53 np0005603542.novalocal sudo[6092]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhhtnlgtdampluaagghbwvdntwuisemy ; /usr/bin/python3'
Jan 31 05:58:53 np0005603542.novalocal sudo[6092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:53 np0005603542.novalocal python3[6094]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:53 np0005603542.novalocal sudo[6092]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:53 np0005603542.novalocal python3[6170]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:58:53 np0005603542.novalocal python3[6241]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769839133.4062703-253-94084500236710/source _original_basename=tmpjrg1w5vp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:54 np0005603542.novalocal python3[6341]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:58:54 np0005603542.novalocal python3[6412]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839134.1655617-302-175797379360898/source _original_basename=tmprpzf6e9v follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:55 np0005603542.novalocal sudo[6512]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjsbdyathqzfrorkeeewsygxnxykhuns ; /usr/bin/python3'
Jan 31 05:58:55 np0005603542.novalocal sudo[6512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:55 np0005603542.novalocal python3[6514]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:58:55 np0005603542.novalocal sudo[6512]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:55 np0005603542.novalocal sudo[6585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrarjnmyborgforfybgyhwevjtzvaaup ; /usr/bin/python3'
Jan 31 05:58:55 np0005603542.novalocal sudo[6585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:55 np0005603542.novalocal python3[6587]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839135.2421594-382-90537779513302/source _original_basename=tmp1ilekmi6 follow=False checksum=ef41ffc2d4a8b9f73488a75ae66bcded72c1a415 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:55 np0005603542.novalocal sudo[6585]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:56 np0005603542.novalocal python3[6635]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:58:56 np0005603542.novalocal python3[6661]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:58:57 np0005603542.novalocal sudo[6739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huyqzullpxourvrvmhuphgrsviaznqge ; /usr/bin/python3'
Jan 31 05:58:57 np0005603542.novalocal sudo[6739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:57 np0005603542.novalocal python3[6741]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:58:57 np0005603542.novalocal sudo[6739]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:57 np0005603542.novalocal sudo[6812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aekbgfjrxmydeufshdcnjgkolwadyylx ; /usr/bin/python3'
Jan 31 05:58:57 np0005603542.novalocal sudo[6812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:57 np0005603542.novalocal python3[6814]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839136.912489-453-127052212269131/source _original_basename=tmpbkf6__ws follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:57 np0005603542.novalocal sudo[6812]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:57 np0005603542.novalocal sudo[6863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zabapqqmxbcqidwmmobwbkqvujkzdtuo ; /usr/bin/python3'
Jan 31 05:58:57 np0005603542.novalocal sudo[6863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:58 np0005603542.novalocal python3[6865]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-fbab-701a-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:58:58 np0005603542.novalocal sudo[6863]: pam_unix(sudo:session): session closed for user root
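
[note] The play installs /etc/sudoers.d/zuul-sudo-grep with mode=288, which is the decimal form of octal 0440 (Ansible logs file modes as decimal integers), then runs visudo -c to verify the combined sudoers configuration still parses. The same validation step, scripted:

    # visudo -c parses /etc/sudoers plus every file it includes (root is
    # needed for the default paths); a non-zero exit means the new
    # drop-in broke sudo's configuration.
    import subprocess

    proc = subprocess.run(["visudo", "-c"], capture_output=True, text=True)
    print(proc.stdout or proc.stderr, end="")
    raise SystemExit(proc.returncode)
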
Jan 31 05:58:58 np0005603542.novalocal python3[6893]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e3b-3c83-fbab-701a-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 05:59:00 np0005603542.novalocal python3[6921]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:59:22 np0005603542.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 05:59:36 np0005603542.novalocal sudo[6947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdngdsqltrvxufnrrewlibtxuefavnqx ; /usr/bin/python3'
Jan 31 05:59:36 np0005603542.novalocal sudo[6947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:36 np0005603542.novalocal python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:59:36 np0005603542.novalocal sudo[6947]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:26 np0005603542.novalocal systemd[4308]: Starting Mark boot as successful...
Jan 31 06:00:26 np0005603542.novalocal systemd[4308]: Finished Mark boot as successful.
Jan 31 06:00:36 np0005603542.novalocal sshd-session[4317]: Received disconnect from 38.102.83.114 port 45600:11: disconnected by user
Jan 31 06:00:36 np0005603542.novalocal sshd-session[4317]: Disconnected from user zuul 38.102.83.114 port 45600
Jan 31 06:00:36 np0005603542.novalocal sshd-session[4304]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:00:36 np0005603542.novalocal systemd-logind[788]: Session 1 logged out. Waiting for processes to exit.
Jan 31 06:00:47 np0005603542.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 06:00:47 np0005603542.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 06:00:47 np0005603542.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 06:00:47 np0005603542.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 06:00:47 np0005603542.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 06:00:47 np0005603542.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 06:00:47 np0005603542.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 06:00:47 np0005603542.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 06:00:47 np0005603542.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 06:00:47 np0005603542.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9072] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 06:00:47 np0005603542.novalocal systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9228] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9265] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9272] device (eth1): carrier: link connected
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9275] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9284] policy: auto-activating connection 'Wired connection 1' (5ea74ddd-8858-33a0-ae1a-8a96f64e8d31)
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9292] device (eth1): Activation: starting connection 'Wired connection 1' (5ea74ddd-8858-33a0-ae1a-8a96f64e8d31)
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9293] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9300] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9307] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:00:47 np0005603542.novalocal NetworkManager[857]: <info>  [1769839247.9317] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:00:48 np0005603542.novalocal sshd-session[6955]: Accepted publickey for zuul from 38.102.83.114 port 37402 ssh2: RSA SHA256:aLVmJr8JWWHWS0llpJoB9Gsrlwh5Xj4FMq/6l3bp+3U
Jan 31 06:00:48 np0005603542.novalocal systemd-logind[788]: New session 3 of user zuul.
Jan 31 06:00:48 np0005603542.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 31 06:00:48 np0005603542.novalocal sshd-session[6955]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:00:49 np0005603542.novalocal python3[6982]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-d406-dc6a-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:00:58 np0005603542.novalocal sudo[7060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cplyoijweifdirgmswzdyonfuyxhmxkx ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:00:58 np0005603542.novalocal sudo[7060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:59 np0005603542.novalocal python3[7062]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:00:59 np0005603542.novalocal sudo[7060]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:59 np0005603542.novalocal sudo[7133]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gboojiekqbmupgjvpfspjurlrkpkjwoq ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:00:59 np0005603542.novalocal sudo[7133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:59 np0005603542.novalocal python3[7135]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839258.8560226-155-174802194949450/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=12c25624e3f0107b0266739ca4f84969582a1320 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:59 np0005603542.novalocal sudo[7133]: pam_unix(sudo:session): session closed for user root
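
[note] The copy task above drops a keyfile-format connection profile into /etc/NetworkManager/system-connections with mode 0600, and the play restarts NetworkManager below so the profile is loaded. Keyfiles are GLib key-file (INI-style) documents, so configparser can generate a minimal one; the section and key names below are standard keyfile keys, but every value is a placeholder since the real profile's contents are not logged:

    import configparser

    profile = configparser.ConfigParser()
    profile["connection"] = {
        "id": "ci-private-network",
        "type": "ethernet",
        "interface-name": "eth1",
    }
    profile["ipv4"] = {"method": "auto"}
    profile["ipv6"] = {"method": "ignore"}

    with open("ci-private-network.nmconnection", "w") as fh:
        profile.write(fh, space_around_delimiters=False)
    # NetworkManager requires keyfiles to be root-owned with mode 0600,
    # matching mode=0600 in the copy task above.
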
Jan 31 06:00:59 np0005603542.novalocal sudo[7183]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eotxctxmctdcyuzrwkucosgzbpzeizyi ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:00:59 np0005603542.novalocal sudo[7183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:00 np0005603542.novalocal python3[7185]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Stopping Network Manager...
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[857]: <info>  [1769839260.0794] caught SIGTERM, shutting down normally.
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[857]: <info>  [1769839260.0805] dhcp4 (eth0): canceled DHCP transaction
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[857]: <info>  [1769839260.0805] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[857]: <info>  [1769839260.0805] dhcp4 (eth0): state changed no lease
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[857]: <info>  [1769839260.0809] manager: NetworkManager state is now CONNECTING
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[857]: <info>  [1769839260.0959] dhcp4 (eth1): canceled DHCP transaction
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[857]: <info>  [1769839260.0959] dhcp4 (eth1): state changed no lease
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[857]: <info>  [1769839260.1019] exiting (success)
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Stopped Network Manager.
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: NetworkManager.service: Consumed 2.316s CPU time, 10.3M memory peak.
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Starting Network Manager...
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.1758] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:0f1e2a66-84e3-44e9-8fdb-17905db9d508)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.1761] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.1816] manager[0x5560281a2000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Starting Hostname Service...
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Started Hostname Service.
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2518] hostname: hostname: using hostnamed
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2518] hostname: static hostname changed from (none) to "np0005603542.novalocal"
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2525] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2529] manager[0x5560281a2000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2529] manager[0x5560281a2000]: rfkill: WWAN hardware radio set enabled
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2556] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2557] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2557] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2557] manager: Networking is enabled by state file
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2560] settings: Loaded settings plugin: keyfile (internal)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2565] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2607] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2620] dhcp: init: Using DHCP client 'internal'
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2623] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2633] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2643] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2657] device (lo): Activation: starting connection 'lo' (12701cd6-486a-4822-b29d-5b142e7d4428)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2668] device (eth0): carrier: link connected
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2680] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2689] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2689] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2696] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2703] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2708] device (eth1): carrier: link connected
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2711] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2716] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5ea74ddd-8858-33a0-ae1a-8a96f64e8d31) (indicated)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2717] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2721] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2728] device (eth1): Activation: starting connection 'Wired connection 1' (5ea74ddd-8858-33a0-ae1a-8a96f64e8d31)
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Started Network Manager.
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2734] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2740] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2742] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2744] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2747] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2750] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2754] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2757] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2762] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2769] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2773] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2781] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2784] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2794] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2798] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 06:01:00 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839260.2803] device (lo): Activation: successful, device activated.
Jan 31 06:01:00 np0005603542.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 06:01:00 np0005603542.novalocal sudo[7183]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:00 np0005603542.novalocal python3[7251]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-d406-dc6a-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:01:01 np0005603542.novalocal CROND[7255]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 06:01:01 np0005603542.novalocal run-parts[7258]: (/etc/cron.hourly) starting 0anacron
Jan 31 06:01:01 np0005603542.novalocal anacron[7266]: Anacron started on 2026-01-31
Jan 31 06:01:01 np0005603542.novalocal anacron[7266]: Will run job `cron.daily' in 29 min.
Jan 31 06:01:01 np0005603542.novalocal anacron[7266]: Will run job `cron.weekly' in 49 min.
Jan 31 06:01:01 np0005603542.novalocal anacron[7266]: Will run job `cron.monthly' in 69 min.
Jan 31 06:01:01 np0005603542.novalocal anacron[7266]: Jobs will be executed sequentially
Jan 31 06:01:01 np0005603542.novalocal run-parts[7268]: (/etc/cron.hourly) finished 0anacron
Jan 31 06:01:01 np0005603542.novalocal CROND[7254]: (root) CMDEND (run-parts /etc/cron.hourly)
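
Note: the 29/49/69-minute lead times above are each job's fixed delay plus one shared random offset (24 min here) drawn up to RANDOM_DELAY. A sketch assuming the stock EL9 /etc/anacrontab (fields: period, fixed delay in minutes, job id, command):

    cat /etc/anacrontab
    # RANDOM_DELAY=45
    # START_HOURS_RANGE=3-22
    # 1         5   cron.daily    nice run-parts /etc/cron.daily
    # 7        25   cron.weekly   nice run-parts /etc/cron.weekly
    # @monthly 45   cron.monthly  nice run-parts /etc/cron.monthly
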
Jan 31 06:01:05 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839265.3031] dhcp4 (eth0): state changed new lease, address=38.102.83.128
Jan 31 06:01:05 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839265.3044] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 06:01:05 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839265.4483] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 06:01:05 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839265.4509] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 06:01:05 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839265.4511] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 06:01:05 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839265.4516] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 06:01:05 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839265.4519] device (eth0): Activation: successful, device activated.
Jan 31 06:01:05 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839265.4524] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 06:01:05 np0005603542.novalocal sshd-session[7269]: Received disconnect from 45.148.10.152 port 21366:11:  [preauth]
Jan 31 06:01:05 np0005603542.novalocal sshd-session[7269]: Disconnected from authenticating user root 45.148.10.152 port 21366 [preauth]
Jan 31 06:01:15 np0005603542.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:01:30 np0005603542.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.5760] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 06:01:45 np0005603542.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:01:45 np0005603542.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6149] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6156] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6173] device (eth1): Activation: successful, device activated.
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6186] manager: startup complete
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6190] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <warn>  [1769839305.6206] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 06:01:45 np0005603542.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6222] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6339] dhcp4 (eth1): canceled DHCP transaction
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6340] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6340] dhcp4 (eth1): state changed no lease
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6363] policy: auto-activating connection 'ci-private-network' (5b3e436a-1bd9-50ca-b46e-b28c26ac73b3)
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6372] device (eth1): Activation: starting connection 'ci-private-network' (5b3e436a-1bd9-50ca-b46e-b28c26ac73b3)
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6374] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6384] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6400] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.6415] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.7210] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.7213] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:01:45 np0005603542.novalocal NetworkManager[7198]: <info>  [1769839305.7221] device (eth1): Activation: successful, device activated.
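
Note: eth1's assumed profile 'Wired connection 1' timed out waiting for a DHCP lease (ip-config-unavailable), so NetworkManager auto-activated the 'ci-private-network' profile instead. A quick check of which profile won, with names taken from this log:

    nmcli -f NAME,UUID,DEVICE connection show --active
    # expect: ci-private-network  5b3e436a-1bd9-50ca-b46e-b28c26ac73b3  eth1
    nmcli -g IP4.ADDRESS device show eth1
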
Jan 31 06:01:55 np0005603542.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:02:00 np0005603542.novalocal sshd-session[6958]: Received disconnect from 38.102.83.114 port 37402:11: disconnected by user
Jan 31 06:02:00 np0005603542.novalocal sshd-session[6958]: Disconnected from user zuul 38.102.83.114 port 37402
Jan 31 06:02:00 np0005603542.novalocal sshd-session[6955]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:02:00 np0005603542.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 06:02:00 np0005603542.novalocal systemd[1]: session-3.scope: Consumed 1.414s CPU time.
Jan 31 06:02:00 np0005603542.novalocal systemd-logind[788]: Session 3 logged out. Waiting for processes to exit.
Jan 31 06:02:00 np0005603542.novalocal systemd-logind[788]: Removed session 3.
Jan 31 06:02:49 np0005603542.novalocal sshd-session[7317]: Accepted publickey for zuul from 38.102.83.114 port 51218 ssh2: RSA SHA256:aLVmJr8JWWHWS0llpJoB9Gsrlwh5Xj4FMq/6l3bp+3U
Jan 31 06:02:49 np0005603542.novalocal systemd-logind[788]: New session 4 of user zuul.
Jan 31 06:02:49 np0005603542.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 31 06:02:49 np0005603542.novalocal sshd-session[7317]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:02:49 np0005603542.novalocal sudo[7396]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvrniqlgoopupiaplsfickihnszrdtrm ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:02:49 np0005603542.novalocal sudo[7396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:49 np0005603542.novalocal python3[7398]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:02:49 np0005603542.novalocal sudo[7396]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:49 np0005603542.novalocal sudo[7469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbnqcpeccttxrysvmjeiuukokcbzlmv ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:02:49 np0005603542.novalocal sudo[7469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:50 np0005603542.novalocal python3[7471]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839369.2450745-373-19562618847621/source _original_basename=tmpboh77ke9 follow=False checksum=ae76df2b21206ed64f24b43f0a068022f10b8b37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:50 np0005603542.novalocal sudo[7469]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:53 np0005603542.novalocal sshd-session[7320]: Connection closed by 38.102.83.114 port 51218
Jan 31 06:02:53 np0005603542.novalocal sshd-session[7317]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:02:53 np0005603542.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 06:02:53 np0005603542.novalocal systemd-logind[788]: Session 4 logged out. Waiting for processes to exit.
Jan 31 06:02:53 np0005603542.novalocal systemd-logind[788]: Removed session 4.
Jan 31 06:03:26 np0005603542.novalocal systemd[4308]: Created slice User Background Tasks Slice.
Jan 31 06:03:26 np0005603542.novalocal systemd[4308]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 06:03:26 np0005603542.novalocal systemd[4308]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 06:04:07 np0005603542.novalocal sshd-session[7499]: Received disconnect from 36.111.150.151 port 52864:11:  [preauth]
Jan 31 06:04:07 np0005603542.novalocal sshd-session[7499]: Disconnected from authenticating user root 36.111.150.151 port 52864 [preauth]
Jan 31 06:07:29 np0005603542.novalocal sshd-session[7502]: Received disconnect from 45.148.10.157 port 58564:11:  [preauth]
Jan 31 06:07:29 np0005603542.novalocal sshd-session[7502]: Disconnected from authenticating user root 45.148.10.157 port 58564 [preauth]
Jan 31 06:09:26 np0005603542.novalocal systemd[1]: Starting dnf makecache...
Jan 31 06:09:26 np0005603542.novalocal dnf[7504]: Failed determining last makecache time.
Jan 31 06:09:26 np0005603542.novalocal dnf[7504]: CentOS Stream 9 - BaseOS                         22 kB/s | 6.1 kB     00:00
Jan 31 06:09:27 np0005603542.novalocal dnf[7504]: CentOS Stream 9 - AppStream                      57 kB/s | 6.5 kB     00:00
Jan 31 06:09:28 np0005603542.novalocal dnf[7504]: CentOS Stream 9 - CRB                            53 kB/s | 6.0 kB     00:00
Jan 31 06:09:28 np0005603542.novalocal dnf[7504]: CentOS Stream 9 - Extras packages                68 kB/s | 7.3 kB     00:00
Jan 31 06:09:28 np0005603542.novalocal dnf[7504]: Metadata cache created.
Jan 31 06:09:28 np0005603542.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 06:09:28 np0005603542.novalocal systemd[1]: Finished dnf makecache.
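
Note: "Failed determining last makecache time." is expected on a first timer-driven run, since there is no previous cache timestamp to compare against. To inspect or repeat the refresh by hand (assuming the stock dnf-makecache units, whose ExecStart is dnf makecache --timer):

    systemctl list-timers dnf-makecache.timer
    sudo dnf makecache --timer
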
Jan 31 06:09:57 np0005603542.novalocal sshd-session[7513]: Invalid user pi from 178.201.162.195 port 62177
Jan 31 06:09:57 np0005603542.novalocal sshd-session[7513]: Connection closed by invalid user pi 178.201.162.195 port 62177 [preauth]
Jan 31 06:09:57 np0005603542.novalocal sshd-session[7515]: Invalid user pi from 178.201.162.195 port 62424
Jan 31 06:09:57 np0005603542.novalocal sshd-session[7515]: Connection closed by invalid user pi 178.201.162.195 port 62424 [preauth]
Jan 31 06:11:23 np0005603542.novalocal sshd-session[7520]: Accepted publickey for zuul from 38.102.83.114 port 44772 ssh2: RSA SHA256:aLVmJr8JWWHWS0llpJoB9Gsrlwh5Xj4FMq/6l3bp+3U
Jan 31 06:11:23 np0005603542.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 06:11:23 np0005603542.novalocal systemd-logind[788]: New session 5 of user zuul.
Jan 31 06:11:23 np0005603542.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 31 06:11:23 np0005603542.novalocal sshd-session[7520]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:11:23 np0005603542.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 06:11:23 np0005603542.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 06:11:23 np0005603542.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 06:11:23 np0005603542.novalocal sudo[7550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgbpaouukbdsaldpbnwnxrpgztvmygyi ; /usr/bin/python3'
Jan 31 06:11:23 np0005603542.novalocal sudo[7550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:24 np0005603542.novalocal python3[7552]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163e3b-3c83-bc1d-b10c-000000000cb4-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
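
Note: this lsblk call resolves the root disk to the major:minor device number that the io.max writes below reuse; on a virtio disk, vda is 252:0, as seen here:

    lsblk -nd -o MAJ:MIN /dev/vda
    # 252:0
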
Jan 31 06:11:24 np0005603542.novalocal sudo[7550]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:24 np0005603542.novalocal sudo[7579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdqeqfmvxprzvxytuzzrllmrclrrntdm ; /usr/bin/python3'
Jan 31 06:11:24 np0005603542.novalocal sudo[7579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:24 np0005603542.novalocal python3[7581]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:24 np0005603542.novalocal sudo[7579]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:24 np0005603542.novalocal sudo[7605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfzltweezefgvbbeuwdmywresewggyqj ; /usr/bin/python3'
Jan 31 06:11:24 np0005603542.novalocal sudo[7605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:24 np0005603542.novalocal python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:24 np0005603542.novalocal sudo[7605]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:24 np0005603542.novalocal sudo[7631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emnmzrgatiucpftirxavyroygshryfog ; /usr/bin/python3'
Jan 31 06:11:24 np0005603542.novalocal sudo[7631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:24 np0005603542.novalocal python3[7633]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:24 np0005603542.novalocal sudo[7631]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:24 np0005603542.novalocal sudo[7657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wadzwwfhjahurdviwtnpbwrcmcarwglc ; /usr/bin/python3'
Jan 31 06:11:24 np0005603542.novalocal sudo[7657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:25 np0005603542.novalocal python3[7659]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:25 np0005603542.novalocal sudo[7657]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:25 np0005603542.novalocal sudo[7683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inchjawhatigosjywdqkhsuvjfshvdhk ; /usr/bin/python3'
Jan 31 06:11:25 np0005603542.novalocal sudo[7683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:25 np0005603542.novalocal python3[7685]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:25 np0005603542.novalocal sudo[7683]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:25 np0005603542.novalocal sudo[7761]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcsyjrtlkvbsymjzkldwgytnbjpnndcr ; /usr/bin/python3'
Jan 31 06:11:25 np0005603542.novalocal sudo[7761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:26 np0005603542.novalocal python3[7763]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:11:26 np0005603542.novalocal sudo[7761]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:26 np0005603542.novalocal sudo[7834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofjlzvbmmgnzqjzfwfuklnhimgdavnvv ; /usr/bin/python3'
Jan 31 06:11:26 np0005603542.novalocal sudo[7834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:26 np0005603542.novalocal python3[7836]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839885.8228045-375-144636357710760/source _original_basename=tmp2t4rue7f follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:26 np0005603542.novalocal sudo[7834]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:27 np0005603542.novalocal sudo[7884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqfokjbagwqnennrfiepvvxcytytpgst ; /usr/bin/python3'
Jan 31 06:11:27 np0005603542.novalocal sudo[7884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:27 np0005603542.novalocal python3[7886]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:11:27 np0005603542.novalocal systemd[1]: Reloading.
Jan 31 06:11:27 np0005603542.novalocal systemd-rc-local-generator[7904]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:11:27 np0005603542.novalocal sudo[7884]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:28 np0005603542.novalocal sudo[7940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqbqlddwrbahlehrguqxknyehtrdaluz ; /usr/bin/python3'
Jan 31 06:11:28 np0005603542.novalocal sudo[7940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:29 np0005603542.novalocal python3[7942]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 06:11:29 np0005603542.novalocal sudo[7940]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:29 np0005603542.novalocal sudo[7966]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diiqlvsmtquleticzjlfngdkjqgyhtyo ; /usr/bin/python3'
Jan 31 06:11:29 np0005603542.novalocal sudo[7966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:29 np0005603542.novalocal python3[7968]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:29 np0005603542.novalocal sudo[7966]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:29 np0005603542.novalocal sudo[7994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgnbmatfvvpojftqmsxpargizegcywey ; /usr/bin/python3'
Jan 31 06:11:29 np0005603542.novalocal sudo[7994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:30 np0005603542.novalocal python3[7996]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:30 np0005603542.novalocal sudo[7994]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:30 np0005603542.novalocal sudo[8022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dljzpfgydwmlolttyusmibxnsxrzplom ; /usr/bin/python3'
Jan 31 06:11:30 np0005603542.novalocal sudo[8022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:30 np0005603542.novalocal python3[8024]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:30 np0005603542.novalocal sudo[8022]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:30 np0005603542.novalocal sudo[8050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsfpwvhectawerjjsrkncxdstwnchrae ; /usr/bin/python3'
Jan 31 06:11:30 np0005603542.novalocal sudo[8050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:30 np0005603542.novalocal python3[8052]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:30 np0005603542.novalocal sudo[8050]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:31 np0005603542.novalocal python3[8079]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e3b-3c83-bc1d-b10c-000000000cbb-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
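
Note: the four echo tasks above program cgroup-v2 I/O limits. Each io.max file accepts "<MAJ:MIN> key=value ..." where riops/wiops cap read/write IOPS and rbps/wbps cap read/write bytes per second. Reproducing one write and its verification read by hand:

    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
        > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max
    # 252:0 rbps=262144000 wbps=262144000 riops=18000 wiops=18000
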
Jan 31 06:11:31 np0005603542.novalocal python3[8109]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 06:11:34 np0005603542.novalocal sshd-session[7525]: Connection closed by 38.102.83.114 port 44772
Jan 31 06:11:34 np0005603542.novalocal sshd-session[7520]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:11:34 np0005603542.novalocal systemd-logind[788]: Session 5 logged out. Waiting for processes to exit.
Jan 31 06:11:34 np0005603542.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 06:11:34 np0005603542.novalocal systemd[1]: session-5.scope: Consumed 3.234s CPU time.
Jan 31 06:11:34 np0005603542.novalocal systemd-logind[788]: Removed session 5.
Jan 31 06:11:36 np0005603542.novalocal sshd-session[8113]: Accepted publickey for zuul from 38.102.83.114 port 57040 ssh2: RSA SHA256:aLVmJr8JWWHWS0llpJoB9Gsrlwh5Xj4FMq/6l3bp+3U
Jan 31 06:11:36 np0005603542.novalocal systemd-logind[788]: New session 6 of user zuul.
Jan 31 06:11:36 np0005603542.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 31 06:11:36 np0005603542.novalocal sshd-session[8113]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:11:36 np0005603542.novalocal sudo[8140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwxvknqiwzazfdsoinmmkyrtrhoggwnx ; /usr/bin/python3'
Jan 31 06:11:36 np0005603542.novalocal sudo[8140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:37 np0005603542.novalocal python3[8142]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
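
Note: the dnf task above is a plain package install; its command-line equivalent is:

    sudo dnf install -y podman buildah
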
Jan 31 06:11:57 np0005603542.novalocal setsebool[8178]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 06:11:57 np0005603542.novalocal setsebool[8178]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
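
Note: these boolean flips happen inside the package transaction (most likely container-selinux's install scriptlet on EL9) and trigger the SELinux policy reloads logged below. The equivalent manual, persistent change would be:

    sudo setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1
    getsebool virt_use_nfs virt_sandbox_use_all_caps
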
Jan 31 06:12:22 np0005603542.novalocal kernel: SELinux:  Converting 386 SID table entries...
Jan 31 06:12:22 np0005603542.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:12:22 np0005603542.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 06:12:22 np0005603542.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:12:22 np0005603542.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:12:22 np0005603542.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:12:22 np0005603542.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:12:22 np0005603542.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:12:39 np0005603542.novalocal kernel: SELinux:  Converting 389 SID table entries...
Jan 31 06:12:39 np0005603542.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:12:39 np0005603542.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 06:12:39 np0005603542.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:12:39 np0005603542.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:12:39 np0005603542.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:12:39 np0005603542.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:12:39 np0005603542.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:13:06 np0005603542.novalocal dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 06:13:06 np0005603542.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:13:06 np0005603542.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:13:06 np0005603542.novalocal systemd[1]: Reloading.
Jan 31 06:13:06 np0005603542.novalocal systemd-rc-local-generator[8946]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:13:06 np0005603542.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:13:09 np0005603542.novalocal sudo[8140]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:12 np0005603542.novalocal python3[12386]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163e3b-3c83-f5b1-e33a-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:13:13 np0005603542.novalocal kernel: evm: overlay not supported
Jan 31 06:13:13 np0005603542.novalocal systemd[4308]: Starting D-Bus User Message Bus...
Jan 31 06:13:13 np0005603542.novalocal dbus-broker-launch[13315]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 06:13:13 np0005603542.novalocal dbus-broker-launch[13315]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 06:13:13 np0005603542.novalocal systemd[4308]: Started D-Bus User Message Bus.
Jan 31 06:13:13 np0005603542.novalocal dbus-broker-lau[13315]: Ready
Jan 31 06:13:13 np0005603542.novalocal systemd[4308]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 06:13:13 np0005603542.novalocal systemd[4308]: Created slice Slice /user.
Jan 31 06:13:13 np0005603542.novalocal systemd[4308]: podman-13090.scope: unit configures an IP firewall, but not running as root.
Jan 31 06:13:13 np0005603542.novalocal systemd[4308]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 06:13:13 np0005603542.novalocal systemd[4308]: Started podman-13090.scope.
Jan 31 06:13:14 np0005603542.novalocal systemd[4308]: Started podman-pause-f5568979.scope.
Jan 31 06:13:14 np0005603542.novalocal sudo[13956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gblzjnbatdflujgarheqnkeezxfpaiot ; /usr/bin/python3'
Jan 31 06:13:14 np0005603542.novalocal sudo[13956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:13:15 np0005603542.novalocal python3[13958]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.176:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.176:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:13:15 np0005603542.novalocal python3[13958]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 06:13:15 np0005603542.novalocal sudo[13956]: pam_unix(sudo:session): session closed for user root
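
Note: the blockinfile task appends a marked block to /etc/containers/registries.conf so podman/buildah will pull from the CI registry without TLS. The resulting stanza, reassembled from the parameters logged above:

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.176:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
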
Jan 31 06:13:15 np0005603542.novalocal sshd-session[8116]: Connection closed by 38.102.83.114 port 57040
Jan 31 06:13:15 np0005603542.novalocal sshd-session[8113]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:13:15 np0005603542.novalocal systemd-logind[788]: Session 6 logged out. Waiting for processes to exit.
Jan 31 06:13:15 np0005603542.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 06:13:15 np0005603542.novalocal systemd[1]: session-6.scope: Consumed 41.297s CPU time.
Jan 31 06:13:15 np0005603542.novalocal systemd-logind[788]: Removed session 6.
Jan 31 06:13:39 np0005603542.novalocal sshd-session[24042]: Connection closed by 38.102.83.142 port 40288 [preauth]
Jan 31 06:13:39 np0005603542.novalocal sshd-session[24039]: Connection closed by 38.102.83.142 port 40284 [preauth]
Jan 31 06:13:39 np0005603542.novalocal sshd-session[24043]: Unable to negotiate with 38.102.83.142 port 40302: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 06:13:39 np0005603542.novalocal sshd-session[24045]: Unable to negotiate with 38.102.83.142 port 40324: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 06:13:39 np0005603542.novalocal sshd-session[24046]: Unable to negotiate with 38.102.83.142 port 40314: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
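
Note: this burst from 38.102.83.142 looks like ssh-keyscan probing one host key type per connection ahead of the Zuul login at 06:13:44; "no matching host key type found" simply means this host has no key of the offered type. To see which host keys sshd can actually present:

    ls /etc/ssh/ssh_host_*_key.pub
    # reproduce one probe (assuming ssh-keyscan is what generated these):
    ssh-keyscan -t ed25519 localhost
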
Jan 31 06:13:44 np0005603542.novalocal sshd-session[25523]: Accepted publickey for zuul from 38.102.83.114 port 37650 ssh2: RSA SHA256:aLVmJr8JWWHWS0llpJoB9Gsrlwh5Xj4FMq/6l3bp+3U
Jan 31 06:13:44 np0005603542.novalocal systemd-logind[788]: New session 7 of user zuul.
Jan 31 06:13:44 np0005603542.novalocal systemd[1]: Started Session 7 of User zuul.
Jan 31 06:13:44 np0005603542.novalocal sshd-session[25523]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:13:44 np0005603542.novalocal python3[25619]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBME4VRiBj81SlD+Feqra7xHOQdZ6SjffJb1Ubgqnfr2PHexEvijEi73vxjVZmMQvndbvXasgSnaxdvDPltqI2Ys= zuul@np0005603540.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:13:45 np0005603542.novalocal sudo[25768]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjyqulbrrvncprghnfppkkeslqeefogs ; /usr/bin/python3'
Jan 31 06:13:45 np0005603542.novalocal sudo[25768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:13:45 np0005603542.novalocal python3[25783]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBME4VRiBj81SlD+Feqra7xHOQdZ6SjffJb1Ubgqnfr2PHexEvijEi73vxjVZmMQvndbvXasgSnaxdvDPltqI2Ys= zuul@np0005603540.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
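
Note: each authorized_key task appends the same controller key to the target account's ~/.ssh/authorized_keys (manage_dir=True also creates ~/.ssh with safe modes if it is missing). Verifying the root copy:

    sudo grep 'zuul@np0005603540' /root/.ssh/authorized_keys
    # ecdsa-sha2-nistp256 AAAAE2VjZHNh... zuul@np0005603540.novalocal
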
Jan 31 06:13:45 np0005603542.novalocal sudo[25768]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:46 np0005603542.novalocal sudo[26015]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyxtoujgarguzlbjpqcmlxfglrlagjvs ; /usr/bin/python3'
Jan 31 06:13:46 np0005603542.novalocal sudo[26015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:13:46 np0005603542.novalocal sshd-session[25880]: Received disconnect from 45.148.10.152 port 53478:11:  [preauth]
Jan 31 06:13:46 np0005603542.novalocal sshd-session[25880]: Disconnected from authenticating user root 45.148.10.152 port 53478 [preauth]
Jan 31 06:13:46 np0005603542.novalocal python3[26023]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603542.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 06:13:46 np0005603542.novalocal useradd[26110]: new group: name=cloud-admin, GID=1002
Jan 31 06:13:46 np0005603542.novalocal useradd[26110]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 31 06:13:47 np0005603542.novalocal sudo[26015]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:48 np0005603542.novalocal sudo[26609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrkincmpmblrglihlujmmlhejbnaksxu ; /usr/bin/python3'
Jan 31 06:13:48 np0005603542.novalocal sudo[26609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:13:48 np0005603542.novalocal python3[26615]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBME4VRiBj81SlD+Feqra7xHOQdZ6SjffJb1Ubgqnfr2PHexEvijEi73vxjVZmMQvndbvXasgSnaxdvDPltqI2Ys= zuul@np0005603540.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:13:48 np0005603542.novalocal sudo[26609]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:48 np0005603542.novalocal sudo[26824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsyrlbfylrxrzzchhujsaaaslycgawnh ; /usr/bin/python3'
Jan 31 06:13:48 np0005603542.novalocal sudo[26824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:13:48 np0005603542.novalocal python3[26834]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:13:48 np0005603542.novalocal sudo[26824]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:48 np0005603542.novalocal sudo[27087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maxvajqetalensclaotcppqjsradxuby ; /usr/bin/python3'
Jan 31 06:13:48 np0005603542.novalocal sudo[27087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:13:49 np0005603542.novalocal python3[27091]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840028.3848383-168-211625194597953/source _original_basename=tmp7l6l5vz4 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:13:49 np0005603542.novalocal sudo[27087]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:49 np0005603542.novalocal sudo[27360]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpqxntzcpldhsogdlnlxkntnaqhuqpqs ; /usr/bin/python3'
Jan 31 06:13:49 np0005603542.novalocal sudo[27360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:13:50 np0005603542.novalocal python3[27369]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Jan 31 06:13:50 np0005603542.novalocal systemd[1]: Starting Hostname Service...
Jan 31 06:13:50 np0005603542.novalocal systemd[1]: Started Hostname Service.
Jan 31 06:13:50 np0005603542.novalocal systemd-hostnamed[27462]: Changed pretty hostname to 'compute-1'
Jan 31 06:13:50 compute-1 systemd-hostnamed[27462]: Hostname set to <compute-1> (static)
Jan 31 06:13:50 compute-1 NetworkManager[7198]: <info>  [1769840030.1891] hostname: static hostname changed from "np0005603542.novalocal" to "compute-1"
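
Note: ansible.builtin.hostname with use=systemd goes through systemd-hostnamed over D-Bus, which is why both the pretty and static names change above. The CLI equivalent:

    sudo hostnamectl set-hostname compute-1
    hostnamectl status
    # Static hostname: compute-1
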
Jan 31 06:13:50 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:13:50 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:13:50 compute-1 sudo[27360]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:50 compute-1 sshd-session[25566]: Connection closed by 38.102.83.114 port 37650
Jan 31 06:13:50 compute-1 sshd-session[25523]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:13:50 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 06:13:50 compute-1 systemd[1]: session-7.scope: Consumed 2.060s CPU time.
Jan 31 06:13:50 compute-1 systemd-logind[788]: Session 7 logged out. Waiting for processes to exit.
Jan 31 06:13:50 compute-1 systemd-logind[788]: Removed session 7.
Jan 31 06:13:56 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:13:56 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:13:56 compute-1 systemd[1]: man-db-cache-update.service: Consumed 38.501s CPU time.
Jan 31 06:13:56 compute-1 systemd[1]: run-r55f14c9ae29c40cfa444ad564fe459cf.service: Deactivated successfully.
Jan 31 06:14:00 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:14:20 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 06:14:22 compute-1 sshd-session[30027]: Connection closed by 170.64.139.8 port 50120
Jan 31 06:14:38 compute-1 sshd-session[30028]: Connection closed by authenticating user root 170.64.139.8 port 52522 [preauth]
Jan 31 06:16:43 compute-1 sshd-session[30031]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 06:16:43 compute-1 sshd-session[30031]: Connection reset by 176.120.22.52 port 58524
Jan 31 06:19:50 compute-1 sshd-session[30036]: Received disconnect from 45.148.10.157 port 25244:11:  [preauth]
Jan 31 06:19:50 compute-1 sshd-session[30036]: Disconnected from authenticating user root 45.148.10.157 port 25244 [preauth]
Jan 31 06:20:34 compute-1 sshd-session[30038]: Accepted publickey for zuul from 38.102.83.142 port 58210 ssh2: RSA SHA256:aLVmJr8JWWHWS0llpJoB9Gsrlwh5Xj4FMq/6l3bp+3U
Jan 31 06:20:34 compute-1 systemd-logind[788]: New session 8 of user zuul.
Jan 31 06:20:34 compute-1 systemd[1]: Started Session 8 of User zuul.
Jan 31 06:20:34 compute-1 sshd-session[30038]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:20:34 compute-1 python3[30114]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:20:36 compute-1 sudo[30228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omottgyqpjpbuvgmasbikkrkcvpqmutp ; /usr/bin/python3'
Jan 31 06:20:36 compute-1 sudo[30228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:36 compute-1 python3[30230]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:20:36 compute-1 sudo[30228]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:36 compute-1 sudo[30301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuagulgthfbnhzhdvgugvyessimnolqn ; /usr/bin/python3'
Jan 31 06:20:36 compute-1 sudo[30301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:37 compute-1 python3[30303]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.370135-34197-268501624345153/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:20:37 compute-1 sudo[30301]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:37 compute-1 sudo[30327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyhghrlnucaudnhdgyessnzuvcafbmpj ; /usr/bin/python3'
Jan 31 06:20:37 compute-1 sudo[30327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:37 compute-1 python3[30329]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:20:37 compute-1 sudo[30327]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:37 compute-1 sudo[30400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjclemziolfuzldxumtisupwlirrxwex ; /usr/bin/python3'
Jan 31 06:20:37 compute-1 sudo[30400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:37 compute-1 python3[30402]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.370135-34197-268501624345153/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:20:37 compute-1 sudo[30400]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:37 compute-1 sudo[30426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwyznhnvyugagowwpdvogtjfirbdftxf ; /usr/bin/python3'
Jan 31 06:20:37 compute-1 sudo[30426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:38 compute-1 python3[30428]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:20:38 compute-1 sudo[30426]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:38 compute-1 sudo[30499]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrmqiwmszadutpygeuofgdaobemzkrdl ; /usr/bin/python3'
Jan 31 06:20:38 compute-1 sudo[30499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:38 compute-1 python3[30501]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.370135-34197-268501624345153/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:20:38 compute-1 sudo[30499]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:38 compute-1 sudo[30525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsacgmuupewvcvoymmlwyprldoxccbkw ; /usr/bin/python3'
Jan 31 06:20:38 compute-1 sudo[30525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:38 compute-1 python3[30527]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:20:38 compute-1 sudo[30525]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:38 compute-1 sudo[30598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmetzgndkarmwhqwlcipfvkaoxkiwjip ; /usr/bin/python3'
Jan 31 06:20:38 compute-1 sudo[30598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:39 compute-1 python3[30600]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.370135-34197-268501624345153/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:20:39 compute-1 sudo[30598]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:39 compute-1 sudo[30624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwlqeyzpfofmvltbvkfktspllavhxcbv ; /usr/bin/python3'
Jan 31 06:20:39 compute-1 sudo[30624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:39 compute-1 python3[30626]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:20:39 compute-1 sudo[30624]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:39 compute-1 sudo[30697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvugpjjxggoyumjkwbfubzwmefbasxad ; /usr/bin/python3'
Jan 31 06:20:39 compute-1 sudo[30697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:39 compute-1 python3[30699]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.370135-34197-268501624345153/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:20:39 compute-1 sudo[30697]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:39 compute-1 sudo[30723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epxwlarxovcbjrhfduseuvyzwcdrmvnk ; /usr/bin/python3'
Jan 31 06:20:39 compute-1 sudo[30723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:39 compute-1 python3[30725]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:20:39 compute-1 sudo[30723]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:39 compute-1 sudo[30796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfiwlutrrybkfxilhgnlwupnjwyejgcx ; /usr/bin/python3'
Jan 31 06:20:39 compute-1 sudo[30796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:40 compute-1 python3[30798]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.370135-34197-268501624345153/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:20:40 compute-1 sudo[30796]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:40 compute-1 sudo[30822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfootvzgcznmjuqtkefdifacjvuipwkg ; /usr/bin/python3'
Jan 31 06:20:40 compute-1 sudo[30822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:40 compute-1 python3[30824]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:20:40 compute-1 sudo[30822]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:40 compute-1 sudo[30895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvhdsmicyrioojcpuxirldyqqsrmqxfw ; /usr/bin/python3'
Jan 31 06:20:40 compute-1 sudo[30895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:20:40 compute-1 python3[30897]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769840436.370135-34197-268501624345153/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:20:40 compute-1 sudo[30895]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:52 compute-1 python3[30945]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:25:52 compute-1 sshd-session[30041]: Received disconnect from 38.102.83.142 port 58210:11: disconnected by user
Jan 31 06:25:52 compute-1 sshd-session[30041]: Disconnected from user zuul 38.102.83.142 port 58210
Jan 31 06:25:52 compute-1 sshd-session[30038]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:25:52 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 06:25:52 compute-1 systemd[1]: session-8.scope: Consumed 4.212s CPU time.
Jan 31 06:25:52 compute-1 systemd-logind[788]: Session 8 logged out. Waiting for processes to exit.
Jan 31 06:25:52 compute-1 systemd-logind[788]: Removed session 8.
Jan 31 06:26:08 compute-1 sshd-session[30950]: Received disconnect from 45.148.10.147 port 26320:11:  [preauth]
Jan 31 06:26:08 compute-1 sshd-session[30950]: Disconnected from authenticating user root 45.148.10.147 port 26320 [preauth]
Jan 31 06:30:01 compute-1 anacron[7266]: Job `cron.daily' started
Jan 31 06:30:01 compute-1 anacron[7266]: Job `cron.daily' terminated
Jan 31 06:32:40 compute-1 sshd-session[30956]: Received disconnect from 91.224.92.78 port 16090:11:  [preauth]
Jan 31 06:32:40 compute-1 sshd-session[30956]: Disconnected from authenticating user root 91.224.92.78 port 16090 [preauth]
Jan 31 06:35:05 compute-1 sshd-session[30959]: Connection closed by 2.57.122.238 port 56868
Jan 31 06:36:27 compute-1 sshd-session[30960]: Accepted publickey for zuul from 192.168.122.30 port 35054 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:36:27 compute-1 systemd-logind[788]: New session 9 of user zuul.
Jan 31 06:36:27 compute-1 systemd[1]: Started Session 9 of User zuul.
Jan 31 06:36:27 compute-1 sshd-session[30960]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:36:28 compute-1 python3.9[31113]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:36:30 compute-1 sudo[31292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfnbmbriivvonyvttyfdgsbxefgydbus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841390.102041-57-202289717364404/AnsiballZ_command.py'
Jan 31 06:36:30 compute-1 sudo[31292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:36:30 compute-1 python3.9[31294]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:36:43 compute-1 sudo[31292]: pam_unix(sudo:session): session closed for user root
Jan 31 06:36:44 compute-1 sshd-session[30963]: Connection closed by 192.168.122.30 port 35054
Jan 31 06:36:44 compute-1 sshd-session[30960]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:36:44 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 06:36:44 compute-1 systemd[1]: session-9.scope: Consumed 7.568s CPU time.
Jan 31 06:36:44 compute-1 systemd-logind[788]: Session 9 logged out. Waiting for processes to exit.
Jan 31 06:36:44 compute-1 systemd-logind[788]: Removed session 9.
Jan 31 06:36:59 compute-1 sshd-session[31351]: Accepted publickey for zuul from 192.168.122.30 port 44140 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:36:59 compute-1 systemd-logind[788]: New session 10 of user zuul.
Jan 31 06:36:59 compute-1 systemd[1]: Started Session 10 of User zuul.
Jan 31 06:36:59 compute-1 sshd-session[31351]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:37:00 compute-1 python3.9[31504]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 06:37:01 compute-1 python3.9[31678]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:37:02 compute-1 sudo[31828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzqpucochxcsjuljyrpqtjbrcrxryanr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841422.4395175-94-262004459341224/AnsiballZ_command.py'
Jan 31 06:37:02 compute-1 sudo[31828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:03 compute-1 python3.9[31830]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:37:03 compute-1 sudo[31828]: pam_unix(sudo:session): session closed for user root
Jan 31 06:37:03 compute-1 sudo[31981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oylytthloovnsjhrnbjwkzlgwpgmmbpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841423.5309012-130-258428843832231/AnsiballZ_stat.py'
Jan 31 06:37:03 compute-1 sudo[31981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:04 compute-1 python3.9[31983]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:37:04 compute-1 sudo[31981]: pam_unix(sudo:session): session closed for user root
Jan 31 06:37:04 compute-1 sudo[32133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeazmztstzfemlvqmckdnhgoqqblidrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841424.3731513-154-184954696321269/AnsiballZ_file.py'
Jan 31 06:37:04 compute-1 sudo[32133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:04 compute-1 python3.9[32135]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:37:04 compute-1 sudo[32133]: pam_unix(sudo:session): session closed for user root
Jan 31 06:37:05 compute-1 sudo[32285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdceonjvnugkhcuvxfwxenxvjdqcfttn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841425.1723695-178-253332813578061/AnsiballZ_stat.py'
Jan 31 06:37:05 compute-1 sudo[32285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:05 compute-1 python3.9[32287]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:37:05 compute-1 sudo[32285]: pam_unix(sudo:session): session closed for user root
Jan 31 06:37:06 compute-1 sudo[32408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmiwsvndjlcptkohpuovzirkbywtbxml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841425.1723695-178-253332813578061/AnsiballZ_copy.py'
Jan 31 06:37:06 compute-1 sudo[32408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:06 compute-1 python3.9[32410]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841425.1723695-178-253332813578061/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:37:06 compute-1 sudo[32408]: pam_unix(sudo:session): session closed for user root
Jan 31 06:37:06 compute-1 sudo[32560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyiyovytrrzsjfujycxnkjanvooxgepj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841426.5195959-223-10109666817267/AnsiballZ_setup.py'
Jan 31 06:37:06 compute-1 sudo[32560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:07 compute-1 python3.9[32562]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:37:07 compute-1 sudo[32560]: pam_unix(sudo:session): session closed for user root
Jan 31 06:37:07 compute-1 sudo[32716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shapgigmjwqajtvxlwojkovbwgwyvbtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841427.3922184-247-81069710577939/AnsiballZ_file.py'
Jan 31 06:37:07 compute-1 sudo[32716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:07 compute-1 python3.9[32718]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:37:08 compute-1 sudo[32716]: pam_unix(sudo:session): session closed for user root
Jan 31 06:37:08 compute-1 sudo[32868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddumkjtkkghomzwyukvvvyydkutwjwbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841428.310592-274-279589229606520/AnsiballZ_file.py'
Jan 31 06:37:08 compute-1 sudo[32868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:08 compute-1 python3.9[32870]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:37:08 compute-1 sudo[32868]: pam_unix(sudo:session): session closed for user root
Jan 31 06:37:09 compute-1 python3.9[33020]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:37:12 compute-1 python3.9[33273]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:37:13 compute-1 python3.9[33423]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:37:14 compute-1 python3.9[33577]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:37:15 compute-1 sudo[33733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euldapwjcgfjyxzrjthqmnesrjvhsdzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841435.0413225-418-59904121327060/AnsiballZ_setup.py'
Jan 31 06:37:15 compute-1 sudo[33733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:15 compute-1 python3.9[33735]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:37:15 compute-1 sudo[33733]: pam_unix(sudo:session): session closed for user root
Jan 31 06:37:16 compute-1 sudo[33817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvfzvlghhksxuuzzzkmtqcoqufswpfwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841435.0413225-418-59904121327060/AnsiballZ_dnf.py'
Jan 31 06:37:16 compute-1 sudo[33817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:37:16 compute-1 python3.9[33819]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:38:26 compute-1 sshd-session[33970]: Invalid user sol from 2.57.122.238 port 39580
Jan 31 06:38:26 compute-1 sshd-session[33970]: Connection closed by invalid user sol 2.57.122.238 port 39580 [preauth]
Jan 31 06:38:29 compute-1 systemd[1]: Reloading.
Jan 31 06:38:29 compute-1 systemd-rc-local-generator[34014]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:38:29 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 06:38:31 compute-1 systemd[1]: Reloading.
Jan 31 06:38:31 compute-1 systemd-rc-local-generator[34060]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:38:31 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 06:38:31 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 06:38:31 compute-1 systemd[1]: Reloading.
Jan 31 06:38:31 compute-1 systemd-rc-local-generator[34102]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:38:31 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 06:38:31 compute-1 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 31 06:38:31 compute-1 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 31 06:38:31 compute-1 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 31 06:38:44 compute-1 sshd-session[34149]: Received disconnect from 91.224.92.78 port 51545:11:  [preauth]
Jan 31 06:38:44 compute-1 sshd-session[34149]: Disconnected from authenticating user root 91.224.92.78 port 51545 [preauth]
Jan 31 06:39:59 compute-1 kernel: SELinux:  Converting 2727 SID table entries...
Jan 31 06:39:59 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:39:59 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 31 06:39:59 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:39:59 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:39:59 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:39:59 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:39:59 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:40:00 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 06:40:00 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:40:00 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:40:00 compute-1 systemd[1]: Reloading.
Jan 31 06:40:00 compute-1 systemd-rc-local-generator[34421]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:40:00 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:40:02 compute-1 sudo[33817]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:03 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:40:03 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:40:03 compute-1 systemd[1]: run-r796be34265a2422e96f7b191ec41f2ba.service: Deactivated successfully.
Jan 31 06:40:03 compute-1 sudo[35338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilivoebhnvplfjneiutzypxchkczlzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841603.3075614-455-123702289666866/AnsiballZ_command.py'
Jan 31 06:40:03 compute-1 sudo[35338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:03 compute-1 python3.9[35340]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:40:05 compute-1 sudo[35338]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:06 compute-1 sudo[35619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlmxpvgpzasjdvuwkkztpxawcrxuhcsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841605.542936-478-150229855008843/AnsiballZ_selinux.py'
Jan 31 06:40:06 compute-1 sudo[35619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:06 compute-1 python3.9[35621]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 06:40:06 compute-1 sudo[35619]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:07 compute-1 sudo[35771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ittrrjugxscfdnwnqpybdbjbvmaxidhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841606.8094614-511-97353198592960/AnsiballZ_command.py'
Jan 31 06:40:07 compute-1 sudo[35771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:07 compute-1 python3.9[35773]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 06:40:08 compute-1 sudo[35771]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:09 compute-1 sudo[35925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxgddowvxdhoktplpfvpsstgpatrvlfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841608.8178306-535-50715635438568/AnsiballZ_file.py'
Jan 31 06:40:09 compute-1 sudo[35925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:09 compute-1 python3.9[35927]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:40:09 compute-1 sudo[35925]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:10 compute-1 sudo[36077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzmixglqztttsvlofoubhphtzpnocdct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841610.2386816-559-186704583123129/AnsiballZ_mount.py'
Jan 31 06:40:10 compute-1 sudo[36077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:11 compute-1 python3.9[36079]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 06:40:11 compute-1 sudo[36077]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:13 compute-1 sudo[36229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfafmmjlbuhmcbtmawkalfbnwecxdaho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841612.935133-643-247188049565351/AnsiballZ_file.py'
Jan 31 06:40:13 compute-1 sudo[36229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:13 compute-1 python3.9[36231]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:40:13 compute-1 sudo[36229]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:17 compute-1 sudo[36382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmskauiewhxetrfguoctkdbnmkmpnbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841616.665306-667-252503112442176/AnsiballZ_stat.py'
Jan 31 06:40:17 compute-1 sudo[36382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:18 compute-1 python3.9[36384]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:40:18 compute-1 sudo[36382]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:18 compute-1 sudo[36505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajphpgqtgwqficnxsdtlsfbtlpnxlwfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841616.665306-667-252503112442176/AnsiballZ_copy.py'
Jan 31 06:40:18 compute-1 sudo[36505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:19 compute-1 python3.9[36507]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841616.665306-667-252503112442176/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:40:19 compute-1 sudo[36505]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:20 compute-1 sudo[36657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuqkendhmxgmjfkrhkwmoeysrhkyxjtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841620.700772-739-105889674578010/AnsiballZ_stat.py'
Jan 31 06:40:20 compute-1 sudo[36657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:21 compute-1 python3.9[36659]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:40:21 compute-1 sudo[36657]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:22 compute-1 sudo[36809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjjgftylkmzhkakawncbwnqflhlymiyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841622.002652-763-145477723021250/AnsiballZ_command.py'
Jan 31 06:40:22 compute-1 sudo[36809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:22 compute-1 python3.9[36811]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:40:22 compute-1 sudo[36809]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:23 compute-1 sudo[36962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xljhvrfdysnfrznrdtodbxjfruigzgab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841623.4678314-787-173566754290557/AnsiballZ_file.py'
Jan 31 06:40:23 compute-1 sudo[36962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:23 compute-1 python3.9[36964]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:40:23 compute-1 sudo[36962]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:24 compute-1 sudo[37114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjnedhjivvbobvynyvqkpkcoczqwhkmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841624.4873152-820-141702742189780/AnsiballZ_getent.py'
Jan 31 06:40:24 compute-1 sudo[37114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:25 compute-1 python3.9[37116]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 06:40:25 compute-1 sudo[37114]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:25 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:40:25 compute-1 sudo[37268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czvutfsvvjbwgcmbjyjfdyvnszemeukz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841625.3959863-844-3762781443838/AnsiballZ_group.py'
Jan 31 06:40:25 compute-1 sudo[37268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:26 compute-1 python3.9[37270]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:40:26 compute-1 groupadd[37271]: group added to /etc/group: name=qemu, GID=107
Jan 31 06:40:26 compute-1 groupadd[37271]: group added to /etc/gshadow: name=qemu
Jan 31 06:40:26 compute-1 groupadd[37271]: new group: name=qemu, GID=107
Jan 31 06:40:26 compute-1 sudo[37268]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:27 compute-1 sudo[37426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grzsmtlcskdmmfaypvsaopwqjfpggydx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841626.7255085-868-31439524088614/AnsiballZ_user.py'
Jan 31 06:40:27 compute-1 sudo[37426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:27 compute-1 python3.9[37428]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 06:40:27 compute-1 useradd[37430]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 06:40:27 compute-1 sudo[37426]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:28 compute-1 sudo[37586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrqtnpnmpuiosqsklgokigaeuarlnsvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841628.0101278-892-164789360740677/AnsiballZ_getent.py'
Jan 31 06:40:28 compute-1 sudo[37586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:28 compute-1 python3.9[37588]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 06:40:28 compute-1 sudo[37586]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:28 compute-1 sudo[37739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxcwlqrlkbeqknvjxriniaecgxpupchb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841628.7127388-916-83904339287616/AnsiballZ_group.py'
Jan 31 06:40:28 compute-1 sudo[37739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:29 compute-1 python3.9[37741]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:40:29 compute-1 groupadd[37742]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 31 06:40:29 compute-1 groupadd[37742]: group added to /etc/gshadow: name=hugetlbfs
Jan 31 06:40:29 compute-1 groupadd[37742]: new group: name=hugetlbfs, GID=42477
Jan 31 06:40:29 compute-1 sudo[37739]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:29 compute-1 sudo[37897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvejqjyzfshlbkcxnfjzgevqujqgpehz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841629.6675723-943-281468007548351/AnsiballZ_file.py'
Jan 31 06:40:29 compute-1 sudo[37897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:30 compute-1 python3.9[37899]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 06:40:30 compute-1 sudo[37897]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:30 compute-1 sudo[38049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glvfqberzknhmfddhqhlphiyprfspcck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841630.7506807-976-124636549668881/AnsiballZ_dnf.py'
Jan 31 06:40:30 compute-1 sudo[38049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:31 compute-1 python3.9[38051]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:40:33 compute-1 sudo[38049]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:33 compute-1 sudo[38203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyvvoytrqskisyfmdkzkocsddbsysqhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841633.3435013-1000-222419150250159/AnsiballZ_file.py'
Jan 31 06:40:33 compute-1 sudo[38203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:33 compute-1 python3.9[38205]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:40:33 compute-1 sudo[38203]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:34 compute-1 sudo[38355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuindhamxsnvqqeyjytxoarmiywnrwet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841633.927598-1024-49123304505514/AnsiballZ_stat.py'
Jan 31 06:40:34 compute-1 sudo[38355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:34 compute-1 python3.9[38357]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:40:34 compute-1 sudo[38355]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:34 compute-1 sudo[38478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udrwfdgqozgazyxneqzwyjfkpafsywyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841633.927598-1024-49123304505514/AnsiballZ_copy.py'
Jan 31 06:40:34 compute-1 sudo[38478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:34 compute-1 python3.9[38480]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841633.927598-1024-49123304505514/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:40:34 compute-1 sudo[38478]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:35 compute-1 sudo[38630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnbrezsaqxiwjxdzlehzpzyifjnsudwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841635.1038227-1069-78923246854814/AnsiballZ_systemd.py'
Jan 31 06:40:35 compute-1 sudo[38630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:35 compute-1 python3.9[38632]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:40:36 compute-1 systemd[1]: Starting Load Kernel Modules...
Jan 31 06:40:36 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 06:40:36 compute-1 kernel: Bridge firewalling registered
Jan 31 06:40:36 compute-1 systemd-modules-load[38636]: Inserted module 'br_netfilter'
Jan 31 06:40:36 compute-1 systemd[1]: Finished Load Kernel Modules.
Jan 31 06:40:36 compute-1 sudo[38630]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:36 compute-1 sudo[38791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdmswshysupkjaqormieadytbqrfnzln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841636.3261783-1094-10474310861755/AnsiballZ_stat.py'
Jan 31 06:40:36 compute-1 sudo[38791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:36 compute-1 python3.9[38793]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:40:36 compute-1 sudo[38791]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:37 compute-1 sudo[38914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhroclnrleoyzxlpeywmophehwqxorex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841636.3261783-1094-10474310861755/AnsiballZ_copy.py'
Jan 31 06:40:37 compute-1 sudo[38914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:37 compute-1 python3.9[38916]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841636.3261783-1094-10474310861755/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:40:37 compute-1 sudo[38914]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:38 compute-1 sudo[39066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvkdkyrwisdwodnzlnzzypaympgmkdrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841637.8216383-1147-42027045880054/AnsiballZ_dnf.py'
Jan 31 06:40:38 compute-1 sudo[39066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:38 compute-1 python3.9[39068]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:40:41 compute-1 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 31 06:40:41 compute-1 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 31 06:40:42 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:40:42 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:40:42 compute-1 systemd[1]: Reloading.
Jan 31 06:40:42 compute-1 systemd-rc-local-generator[39131]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:40:42 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:40:43 compute-1 sudo[39066]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:44 compute-1 python3.9[41456]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:40:45 compute-1 python3.9[42892]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 06:40:45 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:40:45 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:40:45 compute-1 systemd[1]: man-db-cache-update.service: Consumed 3.351s CPU time.
Jan 31 06:40:45 compute-1 systemd[1]: run-r14ab6edfa84549c6b3b8556de56ad48c.service: Deactivated successfully.
Jan 31 06:40:45 compute-1 python3.9[43139]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:40:46 compute-1 sudo[43290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btsmfvaizcurhsfvuhwzbufxpjmyxfsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841646.3216476-1264-160691994368893/AnsiballZ_command.py'
Jan 31 06:40:46 compute-1 sudo[43290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:46 compute-1 python3.9[43292]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:40:46 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 06:40:47 compute-1 systemd[1]: Starting Authorization Manager...
Jan 31 06:40:47 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 06:40:47 compute-1 polkitd[43509]: Started polkitd version 0.117
Jan 31 06:40:47 compute-1 polkitd[43509]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 06:40:47 compute-1 polkitd[43509]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 06:40:47 compute-1 polkitd[43509]: Finished loading, compiling and executing 2 rules
Jan 31 06:40:47 compute-1 polkitd[43509]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 31 06:40:47 compute-1 systemd[1]: Started Authorization Manager.
Jan 31 06:40:47 compute-1 sudo[43290]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:47 compute-1 sudo[43677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njkevoodirereeqrewikzbosdmmbbcsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841647.6050885-1291-215534972894330/AnsiballZ_systemd.py'
Jan 31 06:40:47 compute-1 sudo[43677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:48 compute-1 python3.9[43679]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:40:48 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 06:40:48 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 06:40:48 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 06:40:48 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 06:40:48 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 06:40:48 compute-1 sudo[43677]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:50 compute-1 python3.9[43841]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 06:40:53 compute-1 sudo[43991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjuttidzktittjkmnvtqqwskpwviknpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841653.0572631-1462-91971541522637/AnsiballZ_systemd.py'
Jan 31 06:40:53 compute-1 sudo[43991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:53 compute-1 python3.9[43993]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:40:53 compute-1 systemd[1]: Reloading.
Jan 31 06:40:53 compute-1 systemd-rc-local-generator[44016]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:40:54 compute-1 sudo[43991]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:54 compute-1 sudo[44180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eetxqhwymqqveuqugzwiuddczhdlzcdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841654.193434-1462-36394502039654/AnsiballZ_systemd.py'
Jan 31 06:40:54 compute-1 sudo[44180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:54 compute-1 python3.9[44182]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:40:54 compute-1 systemd[1]: Reloading.
Jan 31 06:40:54 compute-1 systemd-rc-local-generator[44206]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:40:54 compute-1 sudo[44180]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:55 compute-1 sudo[44369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znfimmpwtiwdamupgwkjpavlnjjoiwfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841655.210923-1510-157058205985856/AnsiballZ_command.py'
Jan 31 06:40:55 compute-1 sudo[44369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:55 compute-1 python3.9[44371]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:40:55 compute-1 sudo[44369]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:56 compute-1 sudo[44522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xedkoyidaejzbisqkewtqtdagzzqlwdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841655.9029016-1534-227913924293006/AnsiballZ_command.py'
Jan 31 06:40:56 compute-1 sudo[44522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:56 compute-1 python3.9[44524]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:40:56 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 06:40:56 compute-1 sudo[44522]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:56 compute-1 sudo[44675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqcuwowdnihrinzafaerchddtewxzdjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841656.5978522-1558-212504766523571/AnsiballZ_command.py'
Jan 31 06:40:56 compute-1 sudo[44675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:56 compute-1 python3.9[44677]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:40:58 compute-1 sudo[44675]: pam_unix(sudo:session): session closed for user root
Jan 31 06:40:58 compute-1 sudo[44837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bapbovhqvdpboftxhhgjzmdiwkbvcnhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841658.7548208-1582-110787494024473/AnsiballZ_command.py'
Jan 31 06:40:58 compute-1 sudo[44837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:40:59 compute-1 python3.9[44839]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:40:59 compute-1 sudo[44837]: pam_unix(sudo:session): session closed for user root
Jan 31 06:41:00 compute-1 sudo[44990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izlcpymwtunzswtxmgzzjfrerknpnbto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841660.4776855-1606-210412833457078/AnsiballZ_systemd.py'
Jan 31 06:41:00 compute-1 sudo[44990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:01 compute-1 python3.9[44992]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:41:01 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 06:41:01 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 06:41:01 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 06:41:01 compute-1 systemd[1]: Starting Apply Kernel Variables...
Jan 31 06:41:01 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 06:41:01 compute-1 systemd[1]: Finished Apply Kernel Variables.
Jan 31 06:41:01 compute-1 sudo[44990]: pam_unix(sudo:session): session closed for user root
Jan 31 06:41:02 compute-1 sshd-session[31354]: Connection closed by 192.168.122.30 port 44140
Jan 31 06:41:02 compute-1 sshd-session[31351]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:41:02 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 06:41:02 compute-1 systemd[1]: session-10.scope: Consumed 2min 13.402s CPU time.
Jan 31 06:41:02 compute-1 systemd-logind[788]: Session 10 logged out. Waiting for processes to exit.
Jan 31 06:41:02 compute-1 systemd-logind[788]: Removed session 10.
Jan 31 06:41:09 compute-1 sshd-session[45022]: Accepted publickey for zuul from 192.168.122.30 port 54002 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:41:09 compute-1 systemd-logind[788]: New session 11 of user zuul.
Jan 31 06:41:09 compute-1 systemd[1]: Started Session 11 of User zuul.
Jan 31 06:41:09 compute-1 sshd-session[45022]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:41:10 compute-1 python3.9[45175]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:41:11 compute-1 sudo[45329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwuutcwzsofejcvtygrzxllkttiyxdbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841671.3187354-68-122638329334185/AnsiballZ_getent.py'
Jan 31 06:41:11 compute-1 sudo[45329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:11 compute-1 python3.9[45331]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 06:41:11 compute-1 sudo[45329]: pam_unix(sudo:session): session closed for user root
Jan 31 06:41:12 compute-1 sudo[45482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrpswvdfwnnddhozqmqrwmpeimpdpovs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841672.0938041-92-223116375408831/AnsiballZ_group.py'
Jan 31 06:41:12 compute-1 sudo[45482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:12 compute-1 python3.9[45484]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:41:12 compute-1 groupadd[45485]: group added to /etc/group: name=openvswitch, GID=42476
Jan 31 06:41:12 compute-1 groupadd[45485]: group added to /etc/gshadow: name=openvswitch
Jan 31 06:41:12 compute-1 groupadd[45485]: new group: name=openvswitch, GID=42476
Jan 31 06:41:12 compute-1 sudo[45482]: pam_unix(sudo:session): session closed for user root
Jan 31 06:41:13 compute-1 sudo[45640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgifkipavoptgwysxvxwswmwxtwiiwyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841673.0528805-116-267057811751870/AnsiballZ_user.py'
Jan 31 06:41:13 compute-1 sudo[45640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:13 compute-1 python3.9[45642]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 06:41:13 compute-1 useradd[45644]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 06:41:14 compute-1 useradd[45644]: add 'openvswitch' to group 'hugetlbfs'
Jan 31 06:41:14 compute-1 useradd[45644]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 31 06:41:14 compute-1 sudo[45640]: pam_unix(sudo:session): session closed for user root
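
The matching user step (AnsiballZ_user.py above) creates the service account with the same UID as the group's GID and adds it to hugetlbfs, as the useradd lines confirm. A sketch reconstructed from the logged parameters (task name invented, everything else from the log):

    - name: Create the openvswitch service user   # hypothetical task name
      become: true
      ansible.builtin.user:
        name: openvswitch
        uid: 42476
        group: openvswitch
        groups:
          - hugetlbfs
        append: false          # from the log; hugetlbfs is the only secondary group
        comment: openvswitch user
        shell: /sbin/nologin
        state: present
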
Jan 31 06:41:14 compute-1 sudo[45800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urbnvrtscdjrlzjfqgsumwddakmuhdaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841674.5257978-146-275165474294723/AnsiballZ_setup.py'
Jan 31 06:41:14 compute-1 sudo[45800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:15 compute-1 python3.9[45802]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:41:15 compute-1 sudo[45800]: pam_unix(sudo:session): session closed for user root
Jan 31 06:41:15 compute-1 sudo[45884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yotummbomvyloecqtivlotxgetrxphad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841674.5257978-146-275165474294723/AnsiballZ_dnf.py'
Jan 31 06:41:15 compute-1 sudo[45884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:16 compute-1 python3.9[45886]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 06:41:18 compute-1 sudo[45884]: pam_unix(sudo:session): session closed for user root
Jan 31 06:41:19 compute-1 sudo[46047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbpdcutdvhjmtwxchepftwmfvydhggqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841678.879098-188-191044827599975/AnsiballZ_dnf.py'
Jan 31 06:41:19 compute-1 sudo[46047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:19 compute-1 python3.9[46049]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
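
Note the two-phase package handling in this run: a first ansible.legacy.dnf call with download_only=True (06:41:16) pre-fetches the RPMs, then the call above installs them with state=present. In playbook form the pair would look roughly like the following sketch (task names invented; parameters from the two logged invocations; ansible.legacy.dnf in the log is what a playbook normally writes as ansible.builtin.dnf):

    - name: Pre-download openvswitch   # hypothetical task name
      become: true
      ansible.builtin.dnf:
        name: openvswitch
        download_only: true
    - name: Install openvswitch        # hypothetical task name
      become: true
      ansible.builtin.dnf:
        name: openvswitch
        state: present
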
Jan 31 06:41:37 compute-1 kernel: SELinux:  Converting 2739 SID table entries...
Jan 31 06:41:37 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:41:37 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 31 06:41:37 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:41:37 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:41:37 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:41:37 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:41:37 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:41:38 compute-1 groupadd[46072]: group added to /etc/group: name=unbound, GID=994
Jan 31 06:41:38 compute-1 groupadd[46072]: group added to /etc/gshadow: name=unbound
Jan 31 06:41:38 compute-1 groupadd[46072]: new group: name=unbound, GID=994
Jan 31 06:41:38 compute-1 useradd[46079]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 31 06:41:39 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 06:41:39 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 06:41:41 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:41:41 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:41:41 compute-1 systemd[1]: Reloading.
Jan 31 06:41:41 compute-1 systemd-rc-local-generator[46577]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:41:41 compute-1 systemd-sysv-generator[46581]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:41:41 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:41:43 compute-1 sudo[46047]: pam_unix(sudo:session): session closed for user root
Jan 31 06:41:44 compute-1 sudo[47145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kacaahwuoilessqszpvyqulrtpgnjser ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841703.6403043-212-86074883949815/AnsiballZ_systemd.py'
Jan 31 06:41:44 compute-1 sudo[47145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:44 compute-1 python3.9[47147]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:41:44 compute-1 systemd[1]: Reloading.
Jan 31 06:41:44 compute-1 systemd-rc-local-generator[47176]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:41:44 compute-1 systemd-sysv-generator[47181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:41:44 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 06:41:44 compute-1 chown[47189]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 06:41:44 compute-1 ovs-ctl[47194]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 06:41:45 compute-1 ovs-ctl[47194]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 06:41:45 compute-1 ovs-ctl[47194]: Starting ovsdb-server [  OK  ]
Jan 31 06:41:45 compute-1 ovs-vsctl[47243]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 06:41:45 compute-1 ovs-vsctl[47263]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"3f1b6d5d-330e-4693-ab86-ea25a99a46d7\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 06:41:45 compute-1 ovs-ctl[47194]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 06:41:45 compute-1 ovs-ctl[47194]: Enabling remote OVSDB managers [  OK  ]
Jan 31 06:41:45 compute-1 ovs-vsctl[47269]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 31 06:41:45 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 06:41:45 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 06:41:45 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 06:41:45 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 06:41:45 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 06:41:45 compute-1 ovs-ctl[47313]: Inserting openvswitch module [  OK  ]
Jan 31 06:41:45 compute-1 ovs-ctl[47282]: Starting ovs-vswitchd [  OK  ]
Jan 31 06:41:45 compute-1 ovs-vsctl[47331]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Jan 31 06:41:45 compute-1 ovs-ctl[47282]: Enabling remote OVSDB managers [  OK  ]
Jan 31 06:41:45 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 06:41:45 compute-1 systemd[1]: Starting Open vSwitch...
Jan 31 06:41:45 compute-1 systemd[1]: Finished Open vSwitch.
Jan 31 06:41:45 compute-1 sudo[47145]: pam_unix(sudo:session): session closed for user root
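
The block from 06:41:44 to here is one Ansible step: enabling and starting openvswitch.service. The conf.db creation and the ovsdb-server/ovs-vswitchd startup seen above are done by ovs-ctl inside the units themselves, not by Ansible. A sketch of the task, reconstructed from the logged parameters (task name invented):

    - name: Enable and start Open vSwitch   # hypothetical task name
      become: true
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        state: started
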
Jan 31 06:41:46 compute-1 python3.9[47482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:41:46 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:41:46 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:41:46 compute-1 systemd[1]: run-r46f70fc3a9494586a9b3d5cbbf7ee6c0.service: Deactivated successfully.
Jan 31 06:41:47 compute-1 sudo[47633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vldggsgtnwarufljtovzorgfotjdavke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841706.6926086-266-215203006792462/AnsiballZ_sefcontext.py'
Jan 31 06:41:47 compute-1 sudo[47633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:47 compute-1 python3.9[47635]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 06:41:49 compute-1 kernel: SELinux:  Converting 2753 SID table entries...
Jan 31 06:41:49 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:41:49 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 31 06:41:49 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:41:49 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:41:49 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:41:49 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:41:49 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:41:49 compute-1 sudo[47633]: pam_unix(sudo:session): session closed for user root
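
The sefcontext step above registers a persistent SELinux file-context rule for /var/lib/edpm-config; its reload=True is what triggers the kernel's "Converting 2753 SID table entries" policy reload just before the session closes. A sketch from the logged parameters (task name invented):

    - name: Label /var/lib/edpm-config for container access   # hypothetical task name
      become: true
      community.general.sefcontext:
        target: '/var/lib/edpm-config(/.*)?'
        setype: container_file_t
        selevel: s0
        state: present
        reload: true   # from the log; forces the policy reload seen above
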
Jan 31 06:41:50 compute-1 python3.9[47790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:41:51 compute-1 sudo[47946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqbrrkwkrelofeoqpncbjsoklxpxxzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841710.813814-320-197095269467621/AnsiballZ_dnf.py'
Jan 31 06:41:51 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 06:41:51 compute-1 sudo[47946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:51 compute-1 python3.9[47948]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:41:52 compute-1 sudo[47946]: pam_unix(sudo:session): session closed for user root
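
The base-package step above installs the EDPM node tooling in one dnf transaction. Reconstructed from the logged invocation (task name invented, package list verbatim from the log):

    - name: Install EDPM base packages   # hypothetical task name
      become: true
      ansible.builtin.dnf:
        name:
          - driverctl
          - lvm2
          - crudini
          - jq
          - nftables
          - NetworkManager
          - openstack-selinux
          - python3-libselinux
          - python3-pyyaml
          - rsync
          - tmpwatch
          - sysstat
          - iproute-tc
          - ksmtuned
          - systemd-container
          - crypto-policies-scripts
          - grubby
          - sos
        state: present
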
Jan 31 06:41:53 compute-1 sshd-session[47950]: Invalid user solana from 2.57.122.238 port 47828
Jan 31 06:41:53 compute-1 sshd-session[47950]: Connection closed by invalid user solana 2.57.122.238 port 47828 [preauth]
Jan 31 06:41:53 compute-1 sudo[48101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drbhfpgcwhusbwqxngbwwivmumygdghz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841712.992813-344-152979018161029/AnsiballZ_command.py'
Jan 31 06:41:53 compute-1 sudo[48101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:53 compute-1 python3.9[48103]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:41:54 compute-1 sudo[48101]: pam_unix(sudo:session): session closed for user root
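
Directly after installing, the run verifies the same package set against the RPM database with rpm -V via the command module. A sketch (task name invented; command verbatim from the log; the changed_when/failed_when handling is an assumption, since rpm -V exits non-zero on any deviation and verification should not report a change):

    - name: Verify packages against the RPM database   # hypothetical task name
      become: true
      ansible.builtin.command:
        cmd: >-
          rpm -V driverctl lvm2 crudini jq nftables NetworkManager
          openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch
          sysstat iproute-tc ksmtuned systemd-container
          crypto-policies-scripts grubby sos
      register: rpm_verify
      changed_when: false   # assumption: verification never changes state
      failed_when: false    # assumption: deviations are inspected, not fatal
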
Jan 31 06:41:54 compute-1 sudo[48388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbvykskppxlluugdmynecyksywcyhror ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841714.3207052-368-206788410368296/AnsiballZ_file.py'
Jan 31 06:41:54 compute-1 sudo[48388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:54 compute-1 python3.9[48390]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 06:41:54 compute-1 sudo[48388]: pam_unix(sudo:session): session closed for user root
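
The file step above creates the configuration root as a zuul-owned directory with the container_file_t context that the earlier sefcontext rule established. Reconstructed from the logged parameters (task name invented):

    - name: Create the edpm-config directory   # hypothetical task name
      become: true
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        owner: zuul
        group: zuul
        mode: '0755'
        setype: container_file_t
        selevel: s0
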
Jan 31 06:41:55 compute-1 python3.9[48540]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:41:56 compute-1 sudo[48692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrljcdfesmhkudjtjdszdorboupojzzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841715.8318172-416-181726769687170/AnsiballZ_dnf.py'
Jan 31 06:41:56 compute-1 sudo[48692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:41:56 compute-1 python3.9[48694]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:42:00 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:42:00 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:42:00 compute-1 systemd[1]: Reloading.
Jan 31 06:42:00 compute-1 systemd-rc-local-generator[48731]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:42:00 compute-1 systemd-sysv-generator[48736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:42:00 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:42:04 compute-1 sudo[48692]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:04 compute-1 sudo[49008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjjypzypggnhyndgtxgxpjgllmjbxjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841724.3230844-440-243124526571089/AnsiballZ_systemd.py'
Jan 31 06:42:04 compute-1 sudo[49008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:04 compute-1 python3.9[49010]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:42:04 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 06:42:04 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 06:42:04 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 06:42:04 compute-1 systemd[1]: Stopping Network Manager...
Jan 31 06:42:04 compute-1 NetworkManager[7198]: <info>  [1769841724.9773] caught SIGTERM, shutting down normally.
Jan 31 06:42:04 compute-1 NetworkManager[7198]: <info>  [1769841724.9786] dhcp4 (eth0): canceled DHCP transaction
Jan 31 06:42:04 compute-1 NetworkManager[7198]: <info>  [1769841724.9786] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:42:04 compute-1 NetworkManager[7198]: <info>  [1769841724.9786] dhcp4 (eth0): state changed no lease
Jan 31 06:42:04 compute-1 NetworkManager[7198]: <info>  [1769841724.9790] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 06:42:05 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:42:05 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:42:05 compute-1 NetworkManager[7198]: <info>  [1769841725.2201] exiting (success)
Jan 31 06:42:05 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 06:42:05 compute-1 systemd[1]: Stopped Network Manager.
Jan 31 06:42:05 compute-1 systemd[1]: NetworkManager.service: Consumed 18.498s CPU time, 4.3M memory peak, read 0B from disk, written 29.5K to disk.
Jan 31 06:42:05 compute-1 systemd[1]: Starting Network Manager...
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.2700] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:0f1e2a66-84e3-44e9-8fdb-17905db9d508)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.2701] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.2739] manager[0x55ded8f35000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 06:42:05 compute-1 systemd[1]: Starting Hostname Service...
Jan 31 06:42:05 compute-1 systemd[1]: Started Hostname Service.
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3404] hostname: hostname: using hostnamed
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3405] hostname: static hostname changed from (none) to "compute-1"
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3409] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3412] manager[0x55ded8f35000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3412] manager[0x55ded8f35000]: rfkill: WWAN hardware radio set enabled
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3429] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3436] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3436] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3437] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3437] manager: Networking is enabled by state file
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3439] settings: Loaded settings plugin: keyfile (internal)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3442] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3460] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3468] dhcp: init: Using DHCP client 'internal'
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3470] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3474] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3479] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3484] device (lo): Activation: starting connection 'lo' (12701cd6-486a-4822-b29d-5b142e7d4428)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3490] device (eth0): carrier: link connected
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3492] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3496] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3497] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3502] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3507] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3512] device (eth1): carrier: link connected
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3514] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3518] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (5b3e436a-1bd9-50ca-b46e-b28c26ac73b3) (indicated)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3519] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3523] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3529] device (eth1): Activation: starting connection 'ci-private-network' (5b3e436a-1bd9-50ca-b46e-b28c26ac73b3)
Jan 31 06:42:05 compute-1 systemd[1]: Started Network Manager.
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3533] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3539] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3541] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3543] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3545] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3548] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3550] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3558] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3561] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3568] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3571] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3576] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3587] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3605] dhcp4 (eth0): state changed new lease, address=38.102.83.128
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.3610] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 06:42:05 compute-1 systemd[1]: Starting Network Manager Wait Online...
Jan 31 06:42:05 compute-1 sudo[49008]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8058] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8066] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8067] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8069] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8073] device (lo): Activation: successful, device activated.
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8081] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8087] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8091] device (eth1): Activation: successful, device activated.
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8128] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8129] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8134] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8137] device (eth0): Activation: successful, device activated.
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8143] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 06:42:05 compute-1 NetworkManager[49028]: <info>  [1769841725.8146] manager: startup complete
Jan 31 06:42:05 compute-1 systemd[1]: Finished Network Manager Wait Online.
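
The block from 06:42:04 to here is a single Ansible step: NetworkManager is restarted (invocation at 06:42:04) so the just-installed NetworkManager-ovs package takes effect, which the new instance confirms by loading the NMOvsFactory device plugin. Reconstructed from the logged parameters (task name invented):

    - name: Restart NetworkManager to load the OVS plugin   # hypothetical task name
      become: true
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted
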
Jan 31 06:42:06 compute-1 sudo[49236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdeipkxqfcdymfmvursdlhtvshjhiqva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841725.8667505-464-45016762386493/AnsiballZ_dnf.py'
Jan 31 06:42:06 compute-1 sudo[49236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:06 compute-1 python3.9[49238]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:42:13 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:42:13 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:42:13 compute-1 systemd[1]: run-r428f3280adbf4eabb71a4b81f26580d8.service: Deactivated successfully.
Jan 31 06:42:15 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:42:23 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:42:23 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:42:23 compute-1 systemd[1]: Reloading.
Jan 31 06:42:23 compute-1 systemd-rc-local-generator[49285]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:42:23 compute-1 systemd-sysv-generator[49289]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:42:24 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:42:29 compute-1 sudo[49236]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:29 compute-1 sudo[49696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snmhcrwmeyhcpxixkmkelhwhzynposlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841749.3641891-500-235711574039117/AnsiballZ_stat.py'
Jan 31 06:42:29 compute-1 sudo[49696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:29 compute-1 python3.9[49698]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:42:29 compute-1 sudo[49696]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:30 compute-1 sudo[49848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trhnvaihfdekcwcmshpblysywvdhhveg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841749.9977162-527-207093339971742/AnsiballZ_ini_file.py'
Jan 31 06:42:30 compute-1 sudo[49848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:30 compute-1 python3.9[49850]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:30 compute-1 sudo[49848]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:31 compute-1 sudo[50002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pppakgfyktqfwmffgnaihdxpdafezkjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841750.9895642-557-115321596687085/AnsiballZ_ini_file.py'
Jan 31 06:42:31 compute-1 sudo[50002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:31 compute-1 python3.9[50004]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:31 compute-1 sudo[50002]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:31 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:42:31 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:42:31 compute-1 systemd[1]: run-r6216f3ea09eb4c489baac5642e174c52.service: Deactivated successfully.
Jan 31 06:42:31 compute-1 sudo[50155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihggcqrklvacxepaujsjorotiflsbqqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841751.6229217-557-118148928780442/AnsiballZ_ini_file.py'
Jan 31 06:42:31 compute-1 sudo[50155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:32 compute-1 python3.9[50157]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:32 compute-1 sudo[50155]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:32 compute-1 sudo[50307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgienfttjlsvtgrjljurfygkcvndtbvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841752.449725-602-60826093946378/AnsiballZ_ini_file.py'
Jan 31 06:42:32 compute-1 sudo[50307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:33 compute-1 python3.9[50309]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:33 compute-1 sudo[50307]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:33 compute-1 sudo[50459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhydtyphltltaidnhejxthxyvgsbzcbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841753.188419-602-239801534869822/AnsiballZ_ini_file.py'
Jan 31 06:42:33 compute-1 sudo[50459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:33 compute-1 python3.9[50461]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:33 compute-1 sudo[50459]: pam_unix(sudo:session): session closed for user root
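
The five ini_file invocations between 06:42:30 and 06:42:33 all tune NetworkManager's [main] section: no-auto-default=* is added to NetworkManager.conf, while dns=none and rc-manager=unmanaged are removed from both NetworkManager.conf and the cloud-init drop-in, handing DNS and resolv.conf management back to NetworkManager. Two representative tasks, reconstructed from the logged parameters (task names invented):

    - name: Never auto-activate unconfigured interfaces   # hypothetical task name
      become: true
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: '*'
        no_extra_spaces: true
        backup: true
        mode: '0644'
    - name: Drop dns=none so NetworkManager manages DNS   # hypothetical task name
      become: true
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: dns
        value: none
        state: absent   # from the log; removes the option if present
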
Jan 31 06:42:34 compute-1 sudo[50611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqinhrttksdurkpynsasothqooxsfele ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841753.9510584-647-153006916833200/AnsiballZ_stat.py'
Jan 31 06:42:34 compute-1 sudo[50611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:34 compute-1 python3.9[50613]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:42:34 compute-1 sudo[50611]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:34 compute-1 sudo[50734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgcytvxwhpmgijvfxmuqtxmfnlnqzbhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841753.9510584-647-153006916833200/AnsiballZ_copy.py'
Jan 31 06:42:34 compute-1 sudo[50734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:35 compute-1 python3.9[50736]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841753.9510584-647-153006916833200/.source _original_basename=.frw38upo follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:35 compute-1 sudo[50734]: pam_unix(sudo:session): session closed for user root
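
The stat/copy pair above deploys an executable dhclient enter-hook. The log records the destination, mode and content checksum (f6278a40...) but not the file body, so the src below is an assumed role file name, not taken from the log:

    - name: Install dhclient enter hooks   # hypothetical task name
      become: true
      ansible.builtin.copy:
        src: dhclient-enter-hooks   # assumption: role file name is not visible in the log
        dest: /etc/dhcp/dhclient-enter-hooks
        mode: '0755'
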
Jan 31 06:42:35 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 06:42:35 compute-1 sudo[50889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyiftxhniryucyyizndirzrrfdtostcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841755.658156-692-125103670714539/AnsiballZ_file.py'
Jan 31 06:42:35 compute-1 sudo[50889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:36 compute-1 python3.9[50891]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:36 compute-1 sudo[50889]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:36 compute-1 sudo[51041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdwsrtkoxlwhixhokefstdxbsassayad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841756.282048-716-54054276263378/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 31 06:42:36 compute-1 sudo[51041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:36 compute-1 python3.9[51043]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 06:42:36 compute-1 sudo[51041]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:37 compute-1 sudo[51193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytofunexucfrowohphzcxftsgeqlwdys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841757.181182-743-151298057496467/AnsiballZ_file.py'
Jan 31 06:42:37 compute-1 sudo[51193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:37 compute-1 python3.9[51195]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:37 compute-1 sudo[51193]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:38 compute-1 sudo[51345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mupgigwbbfvmgxluloagdudplaohnntg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841758.0192578-773-8347636716827/AnsiballZ_stat.py'
Jan 31 06:42:38 compute-1 sudo[51345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:38 compute-1 sudo[51345]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:38 compute-1 sudo[51468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukixquzvujounsrileanpomnwmzgbxox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841758.0192578-773-8347636716827/AnsiballZ_copy.py'
Jan 31 06:42:38 compute-1 sudo[51468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:39 compute-1 sudo[51468]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:39 compute-1 sudo[51620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thntolbexjlydzromopwxpswdygocyuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841759.2897844-818-247472806383621/AnsiballZ_slurp.py'
Jan 31 06:42:39 compute-1 sudo[51620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:39 compute-1 python3.9[51622]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 06:42:39 compute-1 sudo[51620]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:41 compute-1 sudo[51795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hotdryhhdjrakbzpczrirztigxrhaidh ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841760.1347873-845-241461129194486/async_wrapper.py j540160469757 300 /home/zuul/.ansible/tmp/ansible-tmp-1769841760.1347873-845-241461129194486/AnsiballZ_edpm_os_net_config.py _'
Jan 31 06:42:41 compute-1 sudo[51795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:41 compute-1 ansible-async_wrapper.py[51797]: Invoked with j540160469757 300 /home/zuul/.ansible/tmp/ansible-tmp-1769841760.1347873-845-241461129194486/AnsiballZ_edpm_os_net_config.py _
Jan 31 06:42:41 compute-1 ansible-async_wrapper.py[51800]: Starting module and watcher
Jan 31 06:42:41 compute-1 ansible-async_wrapper.py[51800]: Start watching 51801 (300)
Jan 31 06:42:41 compute-1 ansible-async_wrapper.py[51801]: Start module (51801)
Jan 31 06:42:41 compute-1 ansible-async_wrapper.py[51797]: Return async_wrapper task started.
Jan 31 06:42:41 compute-1 sudo[51795]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:41 compute-1 python3.9[51802]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
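
The edpm_os_net_config module runs asynchronously here (async_wrapper with a 300-second timeout) against /etc/os-net-config/config.yaml, with cleanup=True, debug=True, detailed_exit_codes=True and use_nmstate=True, all per the invocation above. The config file contents are not logged, but the NetworkManager audit trail that follows (an OVS bridge br-ex, eth1 attached as its port, and vlan20-vlan23 OVS interfaces) is consistent with a layout roughly like this hypothetical sketch; every value below is inferred from the device names in the audit log, none is taken from the file itself:

    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false
        members:
          - type: interface
            name: eth1
            primary: true
          - type: vlan
            vlan_id: 20   # matches vlan20-port/vlan20-if in the audit log
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22
          - type: vlan
            vlan_id: 23
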
Jan 31 06:42:42 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 06:42:42 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 06:42:42 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 06:42:42 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 06:42:42 compute-1 kernel: cfg80211: failed to load regulatory.db
Jan 31 06:42:42 compute-1 NetworkManager[49028]: <info>  [1769841762.9736] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Jan 31 06:42:42 compute-1 NetworkManager[49028]: <info>  [1769841762.9750] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0203] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0204] audit: op="connection-add" uuid="b856e0a5-2376-4770-8640-5d18defbb27a" name="br-ex-br" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0220] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0221] audit: op="connection-add" uuid="30a0d8ab-55aa-4740-bbc2-c888938bbd5c" name="br-ex-port" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0234] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0236] audit: op="connection-add" uuid="d26b2a9c-23ea-4afd-9e20-564c6b67a749" name="eth1-port" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0250] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0252] audit: op="connection-add" uuid="e806c127-cb15-4dec-90ae-4d1c96ee0712" name="vlan20-port" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0265] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0267] audit: op="connection-add" uuid="9ce42248-048b-468c-b8a7-7c0313b9c6ac" name="vlan21-port" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0279] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0280] audit: op="connection-add" uuid="0251e950-0aee-4ba4-8f4f-43346c0f7c90" name="vlan22-port" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0291] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0293] audit: op="connection-add" uuid="8a52aca3-6480-45f1-a7b9-09961a9bd375" name="vlan23-port" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0312] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0328] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.0330] audit: op="connection-add" uuid="16881eb2-eba1-487c-9be0-aa5617045f5e" name="br-ex-if" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1383] audit: op="connection-update" uuid="5b3e436a-1bd9-50ca-b46e-b28c26ac73b3" name="ci-private-network" args="ipv6.routes,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routing-rules,ipv6.dns,ipv6.method,ovs-interface.type,ipv4.routes,ipv4.addresses,ipv4.routing-rules,ipv4.never-default,ipv4.dns,ipv4.method,connection.controller,connection.master,connection.timestamp,connection.port-type,connection.slave-type,ovs-external-ids.data" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1401] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1402] audit: op="connection-add" uuid="398ff2d2-0621-4d5b-bfd9-d0ada7bfed14" name="vlan20-if" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1416] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1418] audit: op="connection-add" uuid="6ee34bbc-0a80-4935-aada-8428a13f79ba" name="vlan21-if" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1432] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1434] audit: op="connection-add" uuid="3007d6f3-7a95-492e-b7b9-1744958280ce" name="vlan22-if" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1447] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1449] audit: op="connection-add" uuid="bce5a29b-c3fa-4df0-8c7c-fcc7258df637" name="vlan23-if" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1459] audit: op="connection-delete" uuid="5ea74ddd-8858-33a0-ae1a-8a96f64e8d31" name="Wired connection 1" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1470] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1473] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1479] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1482] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (b856e0a5-2376-4770-8640-5d18defbb27a)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1483] audit: op="connection-activate" uuid="b856e0a5-2376-4770-8640-5d18defbb27a" name="br-ex-br" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1484] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1485] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1490] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1494] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (30a0d8ab-55aa-4740-bbc2-c888938bbd5c)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1495] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1496] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1500] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1504] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d26b2a9c-23ea-4afd-9e20-564c6b67a749)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1506] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1506] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1512] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1515] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (e806c127-cb15-4dec-90ae-4d1c96ee0712)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1517] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1518] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1522] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1526] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (9ce42248-048b-468c-b8a7-7c0313b9c6ac)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1528] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1529] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1533] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1537] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (0251e950-0aee-4ba4-8f4f-43346c0f7c90)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1538] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1539] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1544] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1547] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8a52aca3-6480-45f1-a7b9-09961a9bd375)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1548] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1551] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1553] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1558] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1559] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1562] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1565] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (16881eb2-eba1-487c-9be0-aa5617045f5e)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1566] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1569] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1571] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1572] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1573] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1582] device (eth1): disconnecting for new activation request.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1583] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1586] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1587] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1589] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1591] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1592] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1595] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1599] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (398ff2d2-0621-4d5b-bfd9-d0ada7bfed14)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1600] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1602] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1604] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1605] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1608] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1609] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1612] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1616] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (6ee34bbc-0a80-4935-aada-8428a13f79ba)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1617] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1620] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1622] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1623] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1626] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1626] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1630] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1634] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (3007d6f3-7a95-492e-b7b9-1744958280ce)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1634] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1637] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1639] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1640] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1643] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <warn>  [1769841763.1644] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1646] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1651] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (bce5a29b-c3fa-4df0-8c7c-fcc7258df637)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1651] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1654] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1656] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1657] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1659] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1669] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1670] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1674] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1676] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1682] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1685] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1689] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1692] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1694] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 kernel: ovs-system: entered promiscuous mode
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1708] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1712] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1716] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 kernel: Timeout policy base is empty
Jan 31 06:42:43 compute-1 systemd-udevd[51807]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1718] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1725] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1729] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1732] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1734] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1739] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1742] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1744] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1745] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1749] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1752] dhcp4 (eth0): canceled DHCP transaction
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1752] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1752] dhcp4 (eth0): state changed no lease
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1754] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1762] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1765] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51803 uid=0 result="fail" reason="Device is not activated"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1772] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 06:42:43 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1790] device (eth1): disconnecting for new activation request.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.1791] audit: op="connection-activate" uuid="5b3e436a-1bd9-50ca-b46e-b28c26ac73b3" name="ci-private-network" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:42:43 compute-1 kernel: br-ex: entered promiscuous mode
Jan 31 06:42:43 compute-1 kernel: vlan20: entered promiscuous mode
Jan 31 06:42:43 compute-1 systemd-udevd[51808]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:42:43 compute-1 kernel: vlan21: entered promiscuous mode
Jan 31 06:42:43 compute-1 systemd-udevd[51809]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.2693] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.2707] dhcp4 (eth0): state changed new lease, address=38.102.83.128
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.2724] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.2734] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.2747] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 06:42:43 compute-1 kernel: vlan22: entered promiscuous mode
Jan 31 06:42:43 compute-1 systemd-udevd[51901]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.2767] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.2774] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 06:42:43 compute-1 kernel: vlan23: entered promiscuous mode
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4574] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4576] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4727] device (eth1): Activation: starting connection 'ci-private-network' (5b3e436a-1bd9-50ca-b46e-b28c26ac73b3)
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4733] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4735] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4736] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4738] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4739] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4741] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4742] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4761] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4774] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4777] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4780] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4784] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4789] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4793] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4795] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4798] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4800] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4803] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4805] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4808] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4810] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4813] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4815] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4818] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4821] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4840] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4850] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4853] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4871] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4877] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4881] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4891] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4894] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4899] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4904] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4907] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4911] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4914] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4917] device (eth1): Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4920] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4921] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4921] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4922] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4924] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4926] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4930] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4933] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4937] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4940] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4945] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:42:43 compute-1 NetworkManager[49028]: <info>  [1769841763.4949] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 06:42:44 compute-1 sudo[52159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrtqdjbgavesalyxyokwxssykvookuoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841764.4055347-845-76445240836523/AnsiballZ_async_status.py'
Jan 31 06:42:44 compute-1 sudo[52159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:44 compute-1 NetworkManager[49028]: <info>  [1769841764.7388] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Jan 31 06:42:44 compute-1 python3.9[52161]: ansible-ansible.legacy.async_status Invoked with jid=j540160469757.51797 mode=status _async_dir=/root/.ansible_async
Jan 31 06:42:44 compute-1 sudo[52159]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:44 compute-1 NetworkManager[49028]: <info>  [1769841764.8890] checkpoint[0x55ded8f0a950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 06:42:44 compute-1 NetworkManager[49028]: <info>  [1769841764.8892] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51803 uid=0 result="success"
Jan 31 06:42:45 compute-1 NetworkManager[49028]: <info>  [1769841765.2265] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Jan 31 06:42:45 compute-1 NetworkManager[49028]: <info>  [1769841765.2276] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Jan 31 06:42:45 compute-1 NetworkManager[49028]: <info>  [1769841765.5235] audit: op="networking-control" arg="global-dns-configuration" pid=51803 uid=0 result="success"
Jan 31 06:42:45 compute-1 NetworkManager[49028]: <info>  [1769841765.6219] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 06:42:45 compute-1 NetworkManager[49028]: <info>  [1769841765.6834] audit: op="networking-control" arg="global-dns-configuration" pid=51803 uid=0 result="success"
Jan 31 06:42:45 compute-1 NetworkManager[49028]: <info>  [1769841765.6870] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Jan 31 06:42:45 compute-1 NetworkManager[49028]: <info>  [1769841765.8273] checkpoint[0x55ded8f0aa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 06:42:45 compute-1 NetworkManager[49028]: <info>  [1769841765.8278] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51803 uid=0 result="success"
Jan 31 06:42:45 compute-1 ansible-async_wrapper.py[51801]: Module complete (51801)
Jan 31 06:42:46 compute-1 ansible-async_wrapper.py[51800]: Done in kid B.
Jan 31 06:42:48 compute-1 sudo[52265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwaghnbbnflyxvraxfscfflsexdwhxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841764.4055347-845-76445240836523/AnsiballZ_async_status.py'
Jan 31 06:42:48 compute-1 sudo[52265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:48 compute-1 python3.9[52267]: ansible-ansible.legacy.async_status Invoked with jid=j540160469757.51797 mode=status _async_dir=/root/.ansible_async
Jan 31 06:42:48 compute-1 sudo[52265]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:48 compute-1 sudo[52365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxtklsotaocbmgvvvwfmqoczelhmlmdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841764.4055347-845-76445240836523/AnsiballZ_async_status.py'
Jan 31 06:42:48 compute-1 sudo[52365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:48 compute-1 python3.9[52367]: ansible-ansible.legacy.async_status Invoked with jid=j540160469757.51797 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 06:42:48 compute-1 sudo[52365]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:49 compute-1 sudo[52517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhutkjtumflqasfhqiuhkqrgctcqxotu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841768.9463897-926-48118800104921/AnsiballZ_stat.py'
Jan 31 06:42:49 compute-1 sudo[52517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:49 compute-1 python3.9[52519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:42:49 compute-1 sudo[52517]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:49 compute-1 sudo[52640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myjhfaeotmshujbcwaaeabsbygrdbxkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841768.9463897-926-48118800104921/AnsiballZ_copy.py'
Jan 31 06:42:49 compute-1 sudo[52640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:49 compute-1 python3.9[52642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841768.9463897-926-48118800104921/.source.returncode _original_basename=.z_t2zn81 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:49 compute-1 sudo[52640]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:50 compute-1 sudo[52792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zykauspmmbcyqxeivuiadiwieyrwvnlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841770.295813-974-106689423844983/AnsiballZ_stat.py'
Jan 31 06:42:50 compute-1 sudo[52792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:50 compute-1 python3.9[52794]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:42:50 compute-1 sudo[52792]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:51 compute-1 sudo[52915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvpeaxjmmcpluubhcwxwpmuefmcqmoio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841770.295813-974-106689423844983/AnsiballZ_copy.py'
Jan 31 06:42:51 compute-1 sudo[52915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:51 compute-1 python3.9[52917]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841770.295813-974-106689423844983/.source.cfg _original_basename=.3nak1vo1 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:42:51 compute-1 sudo[52915]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:51 compute-1 sudo[53068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofxlruvpfumiwqwpgodtrzhakefjjbat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841771.5138175-1019-237976429753260/AnsiballZ_systemd.py'
Jan 31 06:42:51 compute-1 sudo[53068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:42:52 compute-1 python3.9[53070]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:42:52 compute-1 systemd[1]: Reloading Network Manager...
Jan 31 06:42:52 compute-1 NetworkManager[49028]: <info>  [1769841772.1204] audit: op="reload" arg="0" pid=53074 uid=0 result="success"
Jan 31 06:42:52 compute-1 NetworkManager[49028]: <info>  [1769841772.1209] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 06:42:52 compute-1 systemd[1]: Reloaded Network Manager.
Jan 31 06:42:52 compute-1 sudo[53068]: pam_unix(sudo:session): session closed for user root
Jan 31 06:42:52 compute-1 sshd-session[45025]: Connection closed by 192.168.122.30 port 54002
Jan 31 06:42:52 compute-1 sshd-session[45022]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:42:52 compute-1 systemd-logind[788]: Session 11 logged out. Waiting for processes to exit.
Jan 31 06:42:52 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 06:42:52 compute-1 systemd[1]: session-11.scope: Consumed 47.165s CPU time.
Jan 31 06:42:52 compute-1 systemd-logind[788]: Removed session 11.
Jan 31 06:42:57 compute-1 sshd-session[53104]: Accepted publickey for zuul from 192.168.122.30 port 52228 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:42:57 compute-1 systemd-logind[788]: New session 12 of user zuul.
Jan 31 06:42:57 compute-1 systemd[1]: Started Session 12 of User zuul.
Jan 31 06:42:57 compute-1 sshd-session[53104]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:42:58 compute-1 python3.9[53258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:42:59 compute-1 python3.9[53412]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:43:00 compute-1 python3.9[53605]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:43:01 compute-1 sshd-session[53107]: Connection closed by 192.168.122.30 port 52228
Jan 31 06:43:01 compute-1 sshd-session[53104]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:43:01 compute-1 systemd-logind[788]: Session 12 logged out. Waiting for processes to exit.
Jan 31 06:43:01 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 06:43:01 compute-1 systemd[1]: session-12.scope: Consumed 1.854s CPU time.
Jan 31 06:43:01 compute-1 systemd-logind[788]: Removed session 12.
Jan 31 06:43:02 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:43:06 compute-1 sshd-session[53634]: Accepted publickey for zuul from 192.168.122.30 port 52236 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:43:06 compute-1 systemd-logind[788]: New session 13 of user zuul.
Jan 31 06:43:06 compute-1 systemd[1]: Started Session 13 of User zuul.
Jan 31 06:43:06 compute-1 sshd-session[53634]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:43:07 compute-1 python3.9[53787]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:43:08 compute-1 python3.9[53942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:43:09 compute-1 sudo[54096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lerbhjuhvnhomfktegbcxjihusjosydh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841788.9686375-81-239896074874367/AnsiballZ_setup.py'
Jan 31 06:43:09 compute-1 sudo[54096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:09 compute-1 python3.9[54098]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:43:09 compute-1 sudo[54096]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:10 compute-1 sudo[54180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wewnmgwqeodtrounkxdhtsuraiedfzxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841788.9686375-81-239896074874367/AnsiballZ_dnf.py'
Jan 31 06:43:10 compute-1 sudo[54180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:10 compute-1 python3.9[54182]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:43:11 compute-1 sudo[54180]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:12 compute-1 sudo[54334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bisvyarwvpvkxjbwmcsdphizhshohuck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841792.2515504-117-5239470179075/AnsiballZ_setup.py'
Jan 31 06:43:12 compute-1 sudo[54334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:12 compute-1 python3.9[54336]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:43:12 compute-1 sudo[54334]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:13 compute-1 sudo[54529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxqbssxrhcwrnzbkuvneqbganrfbrwad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841793.3674743-150-247969999194853/AnsiballZ_file.py'
Jan 31 06:43:13 compute-1 sudo[54529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:13 compute-1 python3.9[54531]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:43:13 compute-1 sudo[54529]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:14 compute-1 sudo[54682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjuojdbolbcmaxytbhgjbwugtgjovuqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841794.0859962-174-217316495098588/AnsiballZ_command.py'
Jan 31 06:43:14 compute-1 sudo[54682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:14 compute-1 python3.9[54684]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:43:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat1259554465-merged.mount: Deactivated successfully.
Jan 31 06:43:15 compute-1 podman[54685]: 2026-01-31 06:43:15.693684577 +0000 UTC m=+0.929297137 system refresh
Jan 31 06:43:15 compute-1 sudo[54682]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:16 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:43:16 compute-1 sudo[54844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xptugxxlbrqbgfiyqutjhybgzgsfdabw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841795.9126937-198-172048406398845/AnsiballZ_stat.py'
Jan 31 06:43:16 compute-1 sudo[54844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:16 compute-1 python3.9[54846]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:43:16 compute-1 sudo[54844]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:16 compute-1 sudo[54967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhtltoacyxstzsgycmtigwuonquydit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841795.9126937-198-172048406398845/AnsiballZ_copy.py'
Jan 31 06:43:16 compute-1 sudo[54967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:17 compute-1 python3.9[54969]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841795.9126937-198-172048406398845/.source.json follow=False _original_basename=podman_network_config.j2 checksum=aaff5e195a1eb43880980a1d59c41d494a2c8efd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:43:17 compute-1 sudo[54967]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:17 compute-1 sudo[55119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqljiicndnszvtxyrltdnywcyxiymkni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841797.3120825-243-240436990264405/AnsiballZ_stat.py'
Jan 31 06:43:17 compute-1 sudo[55119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:17 compute-1 python3.9[55121]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:43:17 compute-1 sudo[55119]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:18 compute-1 sudo[55242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqcywktrimmosckkwhxfclomyrfkrhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841797.3120825-243-240436990264405/AnsiballZ_copy.py'
Jan 31 06:43:18 compute-1 sudo[55242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:18 compute-1 python3.9[55244]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841797.3120825-243-240436990264405/.source.conf follow=False _original_basename=registries.conf.j2 checksum=e5b84cbf5536d1747818507bbe53a53ed67676dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:43:18 compute-1 sudo[55242]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:18 compute-1 sudo[55394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsjgcndeeznjfrqgueeytmkonthramla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841798.536372-291-59609090452381/AnsiballZ_ini_file.py'
Jan 31 06:43:18 compute-1 sudo[55394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:19 compute-1 python3.9[55396]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:43:19 compute-1 sudo[55394]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:19 compute-1 sudo[55546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxbsgzghvzjjtsnoihrnvphwzsxcpssh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841799.24453-291-167746631655220/AnsiballZ_ini_file.py'
Jan 31 06:43:19 compute-1 sudo[55546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:19 compute-1 python3.9[55548]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:43:19 compute-1 sudo[55546]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:19 compute-1 sudo[55698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxkbpqcpbrdzvifoaapyxadohxxsyyew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841799.7654784-291-153677522386388/AnsiballZ_ini_file.py'
Jan 31 06:43:19 compute-1 sudo[55698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:20 compute-1 python3.9[55700]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:43:20 compute-1 sudo[55698]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:20 compute-1 sudo[55850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckquriyufqkylzskckfwoocvurdzvjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841800.3205945-291-78078416187133/AnsiballZ_ini_file.py'
Jan 31 06:43:20 compute-1 sudo[55850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:20 compute-1 python3.9[55852]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:43:20 compute-1 sudo[55850]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:21 compute-1 sudo[56002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygfzbarzxepjwlcivhyasbxsvqkvuroa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841801.2872024-384-169594284425681/AnsiballZ_dnf.py'
Jan 31 06:43:21 compute-1 sudo[56002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:21 compute-1 python3.9[56004]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:43:23 compute-1 sudo[56002]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:23 compute-1 sudo[56155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhokajieusfhgcmmshkwjwfweqnixwfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841803.6532738-417-35120569918639/AnsiballZ_setup.py'
Jan 31 06:43:23 compute-1 sudo[56155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:24 compute-1 python3.9[56157]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:43:24 compute-1 sudo[56155]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:24 compute-1 sudo[56309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmfftpvfofpcdrmzjakhrsqmipafbwnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841804.6385977-441-179360680578895/AnsiballZ_stat.py'
Jan 31 06:43:24 compute-1 sudo[56309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:25 compute-1 python3.9[56311]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:43:25 compute-1 sudo[56309]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:25 compute-1 sudo[56461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhslduolojllgffqtierriyaeimzgcbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841805.3715703-468-85530005904893/AnsiballZ_stat.py'
Jan 31 06:43:25 compute-1 sudo[56461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:25 compute-1 python3.9[56463]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:43:25 compute-1 sudo[56461]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:26 compute-1 sudo[56613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtdwdcutwrhflexkcfwlnrlqeduuzsom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841806.2181473-498-179109242057618/AnsiballZ_command.py'
Jan 31 06:43:26 compute-1 sudo[56613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:26 compute-1 python3.9[56615]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:43:26 compute-1 sudo[56613]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:27 compute-1 sudo[56766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igmplgmsmllqpahlxzmuwksnwyrwufhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841807.0679467-528-184541070600635/AnsiballZ_service_facts.py'
Jan 31 06:43:27 compute-1 sudo[56766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:27 compute-1 python3.9[56768]: ansible-service_facts Invoked
Jan 31 06:43:27 compute-1 network[56785]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:43:27 compute-1 network[56786]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:43:27 compute-1 network[56787]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:43:29 compute-1 sudo[56766]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:30 compute-1 sudo[57070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdasobulqbilunmnrnfmzwejibdhbptp ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769841810.4410622-573-253218329525730/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769841810.4410622-573-253218329525730/args'
Jan 31 06:43:30 compute-1 sudo[57070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:30 compute-1 sudo[57070]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:31 compute-1 sudo[57237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxuyzbctoyqmxppzcrpkofoixipuiqwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841811.121684-606-272422024660935/AnsiballZ_dnf.py'
Jan 31 06:43:31 compute-1 sudo[57237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:31 compute-1 python3.9[57239]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:43:33 compute-1 sudo[57237]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:34 compute-1 sudo[57390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncjysvmmhzxtaovleftueznqfhhgmkvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841813.9687667-645-146259286024755/AnsiballZ_package_facts.py'
Jan 31 06:43:34 compute-1 sudo[57390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:34 compute-1 python3.9[57392]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 06:43:35 compute-1 sudo[57390]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:36 compute-1 sudo[57542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raegwuvbnqbmwkaqtkcildzvhupylpfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841815.8248534-675-29756192743829/AnsiballZ_stat.py'
Jan 31 06:43:36 compute-1 sudo[57542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:36 compute-1 python3.9[57544]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:43:36 compute-1 sudo[57542]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:36 compute-1 sudo[57667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcdvbykafxroddxbejlaqgxrgmcrkkyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841815.8248534-675-29756192743829/AnsiballZ_copy.py'
Jan 31 06:43:36 compute-1 sudo[57667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:37 compute-1 python3.9[57669]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841815.8248534-675-29756192743829/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:43:37 compute-1 sudo[57667]: pam_unix(sudo:session): session closed for user root
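
The copy above records checksum=cfb003e56d... and backup=True: the module rewrites /etc/chrony.conf only when the SHA-1 of the target differs from the rendered template, and keeps a timestamped backup when it does. A rough sketch of that compare-then-copy idempotence, assuming the same SHA-1 convention (the backup naming here is illustrative, not Ansible's exact format):

    import hashlib
    import shutil
    import time
    from pathlib import Path

    def copy_if_changed(src: Path, dest: Path, backup: bool = True) -> bool:
        """Copy src over dest only when their SHA-1 digests differ."""
        def sha1(p: Path) -> str:
            return hashlib.sha1(p.read_bytes()).hexdigest()

        if dest.exists() and sha1(dest) == sha1(src):
            return False                      # unchanged: report ok, not changed
        if backup and dest.exists():
            stamp = time.strftime("%Y-%m-%d@%H:%M:%S~")
            shutil.copy2(dest, dest.with_name(dest.name + "." + stamp))
        shutil.copy2(src, dest)
        return True
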
Jan 31 06:43:37 compute-1 sudo[57821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkwrlsyuqwxugtcwwdocoacongwpjugv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841817.269613-721-3956995657944/AnsiballZ_stat.py'
Jan 31 06:43:37 compute-1 sudo[57821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:37 compute-1 python3.9[57823]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:43:37 compute-1 sudo[57821]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:38 compute-1 sudo[57946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhfjrdylncdwmnhryhauoiqotigsvvbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841817.269613-721-3956995657944/AnsiballZ_copy.py'
Jan 31 06:43:38 compute-1 sudo[57946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:38 compute-1 python3.9[57948]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841817.269613-721-3956995657944/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:43:38 compute-1 sudo[57946]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:39 compute-1 sudo[58100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuffymvezmveuomehqgqivgpkgryurzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841819.2521942-785-271709215557319/AnsiballZ_lineinfile.py'
Jan 31 06:43:39 compute-1 sudo[58100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:39 compute-1 python3.9[58102]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:43:40 compute-1 sudo[58100]: pam_unix(sudo:session): session closed for user root
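
PEERNTP=no in /etc/sysconfig/network stops the legacy DHCP client hooks from feeding DHCP-supplied NTP servers to the time daemon, so chronyd uses only the servers written to /etc/chrony.conf above. The lineinfile semantics logged here (regexp=^PEERNTP=, line=PEERNTP=no, create=True, firstmatch=False) boil down to replace-the-last-match-or-append; a minimal sketch:

    import re
    from pathlib import Path

    def line_in_file(path: Path, regexp: str, line: str) -> bool:
        """Replace the last line matching `regexp` with `line`, else append it."""
        pattern = re.compile(regexp)
        lines = path.read_text().splitlines() if path.exists() else []
        matches = [i for i, text in enumerate(lines) if pattern.search(text)]
        if matches:
            i = matches[-1]                   # firstmatch=False: last match wins
            if lines[i] == line:
                return False                  # already as desired
            lines[i] = line
        else:
            lines.append(line)
        path.write_text("\n".join(lines) + "\n")
        return True

    line_in_file(Path("/etc/sysconfig/network"), r"^PEERNTP=", "PEERNTP=no")
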
Jan 31 06:43:42 compute-1 sudo[58254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhqxytqvjurabezxnqofgysdzvydfjgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841821.8445385-829-76811041843235/AnsiballZ_setup.py'
Jan 31 06:43:42 compute-1 sudo[58254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:42 compute-1 python3.9[58256]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:43:42 compute-1 sudo[58254]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:43 compute-1 sudo[58338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nndwxaarncrshkcjondfcfatkirqvfnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841821.8445385-829-76811041843235/AnsiballZ_systemd.py'
Jan 31 06:43:43 compute-1 sudo[58338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:43 compute-1 python3.9[58340]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:43:44 compute-1 sudo[58338]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:46 compute-1 sudo[58492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cribeluoskgxaumyfvyagqjokaflthdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841826.0126789-876-140367174223393/AnsiballZ_setup.py'
Jan 31 06:43:46 compute-1 sudo[58492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:46 compute-1 python3.9[58494]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:43:46 compute-1 sudo[58492]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:47 compute-1 sudo[58576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvsofiyopucfjcdczzxgtnyuzllkfslf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841826.0126789-876-140367174223393/AnsiballZ_systemd.py'
Jan 31 06:43:47 compute-1 sudo[58576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:47 compute-1 python3.9[58578]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:43:47 compute-1 chronyd[796]: chronyd exiting
Jan 31 06:43:47 compute-1 systemd[1]: Stopping NTP client/server...
Jan 31 06:43:47 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 06:43:47 compute-1 systemd[1]: Stopped NTP client/server.
Jan 31 06:43:47 compute-1 systemd[1]: Starting NTP client/server...
Jan 31 06:43:47 compute-1 chronyd[58586]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 06:43:47 compute-1 chronyd[58586]: Frequency -26.398 +/- 0.165 ppm read from /var/lib/chrony/drift
Jan 31 06:43:47 compute-1 chronyd[58586]: Loaded seccomp filter (level 2)
Jan 31 06:43:47 compute-1 systemd[1]: Started NTP client/server.
Jan 31 06:43:47 compute-1 sudo[58576]: pam_unix(sudo:session): session closed for user root
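
After the restart, chronyd reloads its drift file (the "Frequency -26.398 ppm" line above) and re-resolves its servers. A quick follow-up check that the daemon actually converges, illustrative rather than part of the logged playbook, is to parse the current offset out of `chronyc tracking`:

    import subprocess

    def chrony_offset_seconds() -> float:
        """Parse the current offset from `chronyc tracking` output."""
        out = subprocess.run(["chronyc", "tracking"],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if line.startswith("System time"):
                # e.g. "System time : 0.000012345 seconds fast of NTP time"
                return float(line.split(":", 1)[1].split()[0])
        raise RuntimeError("unexpected chronyc output")
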
Jan 31 06:43:48 compute-1 sshd-session[53637]: Connection closed by 192.168.122.30 port 52236
Jan 31 06:43:48 compute-1 sshd-session[53634]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:43:48 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 06:43:48 compute-1 systemd[1]: session-13.scope: Consumed 22.046s CPU time.
Jan 31 06:43:48 compute-1 systemd-logind[788]: Session 13 logged out. Waiting for processes to exit.
Jan 31 06:43:48 compute-1 systemd-logind[788]: Removed session 13.
Jan 31 06:43:54 compute-1 sshd-session[58612]: Accepted publickey for zuul from 192.168.122.30 port 56526 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:43:54 compute-1 systemd-logind[788]: New session 14 of user zuul.
Jan 31 06:43:54 compute-1 systemd[1]: Started Session 14 of User zuul.
Jan 31 06:43:54 compute-1 sshd-session[58612]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:43:54 compute-1 sudo[58765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgjkyoumbdaexmkvsmbdgkxfaqvtdzgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841834.1073284-27-8008526964456/AnsiballZ_file.py'
Jan 31 06:43:54 compute-1 sudo[58765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:54 compute-1 python3.9[58767]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:43:54 compute-1 sudo[58765]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:55 compute-1 sudo[58917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsisgdtncmdsmtigctcqkddaegrfqqum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841834.9454412-63-208091987485696/AnsiballZ_stat.py'
Jan 31 06:43:55 compute-1 sudo[58917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:55 compute-1 python3.9[58919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:43:55 compute-1 sudo[58917]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:55 compute-1 sudo[59040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zopdfvvdahoqgqdykpltoknhynnrhqxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841834.9454412-63-208091987485696/AnsiballZ_copy.py'
Jan 31 06:43:56 compute-1 sudo[59040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:43:56 compute-1 python3.9[59042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841834.9454412-63-208091987485696/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:43:56 compute-1 sudo[59040]: pam_unix(sudo:session): session closed for user root
Jan 31 06:43:56 compute-1 sshd-session[58615]: Connection closed by 192.168.122.30 port 56526
Jan 31 06:43:56 compute-1 sshd-session[58612]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:43:56 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 06:43:56 compute-1 systemd[1]: session-14.scope: Consumed 1.423s CPU time.
Jan 31 06:43:56 compute-1 systemd-logind[788]: Session 14 logged out. Waiting for processes to exit.
Jan 31 06:43:56 compute-1 systemd-logind[788]: Removed session 14.
Jan 31 06:44:02 compute-1 sshd-session[59067]: Accepted publickey for zuul from 192.168.122.30 port 38228 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:44:02 compute-1 systemd-logind[788]: New session 15 of user zuul.
Jan 31 06:44:02 compute-1 systemd[1]: Started Session 15 of User zuul.
Jan 31 06:44:02 compute-1 sshd-session[59067]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:44:03 compute-1 python3.9[59220]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:44:04 compute-1 sudo[59374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvscjkwwwghngdcaocfznowvedrpjtfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841843.6433625-60-264802224638183/AnsiballZ_file.py'
Jan 31 06:44:04 compute-1 sudo[59374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:04 compute-1 python3.9[59376]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:04 compute-1 sudo[59374]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:04 compute-1 sudo[59549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxijzwabqzhnvxiycmidxbvjzqwxspgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841844.5177941-84-225916064488253/AnsiballZ_stat.py'
Jan 31 06:44:04 compute-1 sudo[59549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:05 compute-1 python3.9[59551]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:05 compute-1 sudo[59549]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:05 compute-1 sudo[59672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtsfxeefyubzgurzqjrwrabivpclodwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841844.5177941-84-225916064488253/AnsiballZ_copy.py'
Jan 31 06:44:05 compute-1 sudo[59672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:05 compute-1 python3.9[59674]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769841844.5177941-84-225916064488253/.source.json _original_basename=.fgonlmru follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:05 compute-1 sudo[59672]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:06 compute-1 sudo[59824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezxymvfmgnndegcicungivcjnityhosh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841846.405104-153-22909657935632/AnsiballZ_stat.py'
Jan 31 06:44:06 compute-1 sudo[59824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:06 compute-1 python3.9[59826]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:06 compute-1 sudo[59824]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:07 compute-1 sudo[59947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfrjxdzpoepzoulqilfveyhnxydympfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841846.405104-153-22909657935632/AnsiballZ_copy.py'
Jan 31 06:44:07 compute-1 sudo[59947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:07 compute-1 python3.9[59949]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841846.405104-153-22909657935632/.source _original_basename=.gmnshwtv follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:07 compute-1 sudo[59947]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:07 compute-1 sudo[60099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpxcxqnaylalojlmfiqpsjviaslodtrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841847.728222-201-205694900568614/AnsiballZ_file.py'
Jan 31 06:44:07 compute-1 sudo[60099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:08 compute-1 python3.9[60101]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:44:08 compute-1 sudo[60099]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:08 compute-1 sudo[60251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inuvughdfcalzuqppmlxqkiqijzrzwfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841848.3772602-225-205744552218359/AnsiballZ_stat.py'
Jan 31 06:44:08 compute-1 sudo[60251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:08 compute-1 python3.9[60253]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:08 compute-1 sudo[60251]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:09 compute-1 sudo[60374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trzfqprmuqbogybghjkclnawdkftxdaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841848.3772602-225-205744552218359/AnsiballZ_copy.py'
Jan 31 06:44:09 compute-1 sudo[60374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:09 compute-1 python3.9[60376]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841848.3772602-225-205744552218359/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:44:09 compute-1 sudo[60374]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:09 compute-1 sudo[60526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mktdxpjhbgxtfdetddeeztwfmgufsqub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841849.4684632-225-59443580330307/AnsiballZ_stat.py'
Jan 31 06:44:09 compute-1 sudo[60526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:09 compute-1 python3.9[60528]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:09 compute-1 sudo[60526]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:10 compute-1 sudo[60649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jntbkncvqwvuoizifjhlqcbhjvanjoma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841849.4684632-225-59443580330307/AnsiballZ_copy.py'
Jan 31 06:44:10 compute-1 sudo[60649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:10 compute-1 python3.9[60651]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769841849.4684632-225-59443580330307/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:44:10 compute-1 sudo[60649]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:10 compute-1 sudo[60801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnylknakyfebivuvcgaglwzzeouxllab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841850.5500944-312-152396212604428/AnsiballZ_file.py'
Jan 31 06:44:10 compute-1 sudo[60801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:10 compute-1 python3.9[60803]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:10 compute-1 sudo[60801]: pam_unix(sudo:session): session closed for user root
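
One detail worth flagging in the file task above: mode=420 is not octal 0420 but decimal 420, i.e. octal 0644. This is the classic YAML pitfall where an unquoted mode: 0644 is parsed as an octal integer and then logged in decimal. Here it still yields the intended rw-r--r-- bits, though 0644 on a directory leaves the execute (traversal) bits off for non-root users, which may or may not be intentional upstream:

    # An unquoted YAML `mode: 0644` reaches the module as the integer 420,
    # because YAML 1.1 reads the leading zero as octal; Ansible logs it in decimal.
    assert 0o644 == 420
    print(oct(420))            # '0o644'
    # Safer spellings in a playbook: mode: '0644' (quoted) or mode: u=rw,go=r
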
Jan 31 06:44:11 compute-1 sudo[60953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvspgyriqxzonpwjmuzbkxfqpvxspmxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841851.2536395-336-253531885919284/AnsiballZ_stat.py'
Jan 31 06:44:11 compute-1 sudo[60953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:11 compute-1 python3.9[60955]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:11 compute-1 sudo[60953]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:12 compute-1 sudo[61076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whxmvhwnvsntshrrluqeuovderhiyinx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841851.2536395-336-253531885919284/AnsiballZ_copy.py'
Jan 31 06:44:12 compute-1 sudo[61076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:12 compute-1 python3.9[61078]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841851.2536395-336-253531885919284/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:12 compute-1 sudo[61076]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:12 compute-1 sudo[61228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcsedztzivszwulshimmqqeuenllcknb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841852.504154-381-263304715854970/AnsiballZ_stat.py'
Jan 31 06:44:12 compute-1 sudo[61228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:12 compute-1 python3.9[61230]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:12 compute-1 sudo[61228]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:13 compute-1 sudo[61351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daqkypmijkezaaxuiunxgsgeiwtansii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841852.504154-381-263304715854970/AnsiballZ_copy.py'
Jan 31 06:44:13 compute-1 sudo[61351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:13 compute-1 python3.9[61353]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841852.504154-381-263304715854970/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:13 compute-1 sudo[61351]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:14 compute-1 sudo[61503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pebglepefhhcwzdtutrckbhehlfrwrat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841853.7187095-426-197759987170748/AnsiballZ_systemd.py'
Jan 31 06:44:14 compute-1 sudo[61503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:14 compute-1 python3.9[61505]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:44:14 compute-1 systemd[1]: Reloading.
Jan 31 06:44:14 compute-1 systemd-sysv-generator[61536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:44:14 compute-1 systemd-rc-local-generator[61532]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:44:14 compute-1 systemd[1]: Reloading.
Jan 31 06:44:14 compute-1 systemd-rc-local-generator[61565]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:44:14 compute-1 systemd-sysv-generator[61569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:44:15 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 06:44:15 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 06:44:15 compute-1 sudo[61503]: pam_unix(sudo:session): session closed for user root
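
The unit file, its 91-edpm-container-shutdown.preset, and the daemon_reload=True / enabled=True / state=started module call above form the usual install, enable, start sequence. A compressed sketch of that sequence; the preset body is an assumption (it is not shown in the log), based only on the standard systemd.preset(5) "enable <unit>" syntax:

    import subprocess
    from pathlib import Path

    PRESET = Path("/etc/systemd/system-preset/91-edpm-container-shutdown.preset")

    def install_and_start() -> None:
        # Assumed preset content: preset files are lines like "enable <unit>".
        PRESET.write_text("enable edpm-container-shutdown.service\n")
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", "--now",
                        "edpm-container-shutdown.service"], check=True)
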
Jan 31 06:44:15 compute-1 sudo[61731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwdbwdquqxxxqilwaxdhhelwnziukluv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841855.3714936-450-91027854243693/AnsiballZ_stat.py'
Jan 31 06:44:15 compute-1 sudo[61731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:15 compute-1 python3.9[61733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:15 compute-1 sudo[61731]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:16 compute-1 sudo[61854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nteqqjnmbvxfttilquajjzanpscoztjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841855.3714936-450-91027854243693/AnsiballZ_copy.py'
Jan 31 06:44:16 compute-1 sudo[61854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:16 compute-1 python3.9[61856]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841855.3714936-450-91027854243693/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:16 compute-1 sudo[61854]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:16 compute-1 sudo[62006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vawkmsnsoaykdoorhcqvsjdpwvjvxddc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841856.7020338-495-129996121587982/AnsiballZ_stat.py'
Jan 31 06:44:16 compute-1 sudo[62006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:17 compute-1 python3.9[62008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:17 compute-1 sudo[62006]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:17 compute-1 sudo[62129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwfykrsuuokjomopduqujweefkmempdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841856.7020338-495-129996121587982/AnsiballZ_copy.py'
Jan 31 06:44:17 compute-1 sudo[62129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:17 compute-1 python3.9[62131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841856.7020338-495-129996121587982/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:17 compute-1 sudo[62129]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:18 compute-1 sudo[62281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ursubfccgtdmjknsbusepxznzeqzdivw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841857.8867376-540-184653447549619/AnsiballZ_systemd.py'
Jan 31 06:44:18 compute-1 sudo[62281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:18 compute-1 python3.9[62283]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:44:18 compute-1 systemd[1]: Reloading.
Jan 31 06:44:18 compute-1 systemd-rc-local-generator[62307]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:44:18 compute-1 systemd-sysv-generator[62313]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:44:18 compute-1 systemd[1]: Reloading.
Jan 31 06:44:18 compute-1 systemd-rc-local-generator[62344]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:44:18 compute-1 systemd-sysv-generator[62349]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:44:18 compute-1 systemd[1]: Starting Create netns directory...
Jan 31 06:44:18 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 06:44:18 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 06:44:18 compute-1 systemd[1]: Finished Create netns directory.
Jan 31 06:44:19 compute-1 sudo[62281]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:19 compute-1 python3.9[62509]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:44:19 compute-1 network[62526]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:44:19 compute-1 network[62527]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:44:19 compute-1 network[62528]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:44:22 compute-1 sudo[62788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phdmwnufldphvtjtrdpqwhyesemlcjkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841862.160281-588-6095499458559/AnsiballZ_systemd.py'
Jan 31 06:44:22 compute-1 sudo[62788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:22 compute-1 python3.9[62790]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:44:22 compute-1 systemd[1]: Reloading.
Jan 31 06:44:22 compute-1 systemd-rc-local-generator[62814]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:44:22 compute-1 systemd-sysv-generator[62820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:44:22 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 06:44:23 compute-1 iptables.init[62829]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 06:44:23 compute-1 iptables.init[62829]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 06:44:23 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 06:44:23 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 06:44:23 compute-1 sudo[62788]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:23 compute-1 sudo[63023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zakzfhomfqibufzltylcapocekkopxrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841863.5012228-588-252548616483185/AnsiballZ_systemd.py'
Jan 31 06:44:23 compute-1 sudo[63023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:24 compute-1 python3.9[63025]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:44:24 compute-1 sudo[63023]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:24 compute-1 sudo[63177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiwxxezkzfpknjpetgissultadcmtxqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841864.3746712-636-140344260159653/AnsiballZ_systemd.py'
Jan 31 06:44:24 compute-1 sudo[63177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:24 compute-1 python3.9[63179]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:44:24 compute-1 systemd[1]: Reloading.
Jan 31 06:44:24 compute-1 systemd-rc-local-generator[63205]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:44:24 compute-1 systemd-sysv-generator[63212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:44:25 compute-1 systemd[1]: Starting Netfilter Tables...
Jan 31 06:44:25 compute-1 systemd[1]: Finished Netfilter Tables.
Jan 31 06:44:25 compute-1 sudo[63177]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:25 compute-1 sudo[63369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkfudflvjpaubzpqfakbwozizixonmoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841865.4110243-660-209457942161101/AnsiballZ_command.py'
Jan 31 06:44:25 compute-1 sudo[63369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:25 compute-1 python3.9[63371]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:44:25 compute-1 sudo[63369]: pam_unix(sudo:session): session closed for user root
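
`nft flush ruleset` wipes every table, chain and rule in one transaction, which is safe here only because the iptables service was already stopped above and the playbook loads its replacement rule files shortly afterwards. When scripting this by hand it is worth dry-running the replacement first; nft's -c flag parses and checks a file without committing it (sketch, with an illustrative file path):

    import subprocess

    def reload_ruleset(path: str = "/etc/nftables/iptables.nft") -> None:
        """Syntax-check a ruleset file, then flush and load it."""
        subprocess.run(["nft", "-c", "-f", path], check=True)   # dry run only
        subprocess.run(["nft", "flush", "ruleset"], check=True)
        subprocess.run(["nft", "-f", path], check=True)

For a truly atomic swap, put `flush ruleset` at the top of the file itself so a single `nft -f` replaces everything in one transaction.
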
Jan 31 06:44:26 compute-1 sudo[63522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phwnbnysklcuoevjsruoqjiigipuydyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841866.6157181-702-113895616031878/AnsiballZ_stat.py'
Jan 31 06:44:26 compute-1 sudo[63522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:27 compute-1 python3.9[63524]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:27 compute-1 sudo[63522]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:27 compute-1 sudo[63647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukptcbqkwxqzihspihwzaamyogitjozy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841866.6157181-702-113895616031878/AnsiballZ_copy.py'
Jan 31 06:44:27 compute-1 sudo[63647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:27 compute-1 python3.9[63649]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841866.6157181-702-113895616031878/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:27 compute-1 sudo[63647]: pam_unix(sudo:session): session closed for user root
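
The copy above passes validate=/usr/sbin/sshd -T -f %s, so the rendered config is test-parsed before it replaces /etc/ssh/sshd_config; a syntax error aborts the copy instead of locking out SSH. The same write, validate, swap pattern in plain Python (paths illustrative):

    import subprocess
    import tempfile
    from pathlib import Path

    def install_sshd_config(new_text: str, dest: str = "/etc/ssh/sshd_config") -> None:
        """Write a candidate config, validate it with sshd, then move it into place."""
        # Same directory as dest, so the final rename stays on one filesystem.
        with tempfile.NamedTemporaryFile("w", delete=False, dir="/etc/ssh") as tmp:
            tmp.write(new_text)
            candidate = tmp.name
        try:
            # sshd -T prints the effective config and fails on syntax errors.
            subprocess.run(["/usr/sbin/sshd", "-T", "-f", candidate],
                           check=True, capture_output=True)
        except subprocess.CalledProcessError:
            Path(candidate).unlink()
            raise
        Path(candidate).replace(dest)   # atomic rename
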
Jan 31 06:44:28 compute-1 sudo[63800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fehddxucpgtkzemkihcklatxtapishbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841867.8053594-747-245250090264211/AnsiballZ_systemd.py'
Jan 31 06:44:28 compute-1 sudo[63800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:28 compute-1 python3.9[63802]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:44:28 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 06:44:28 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 06:44:28 compute-1 sshd[1005]: Received SIGHUP; restarting.
Jan 31 06:44:28 compute-1 sshd[1005]: Server listening on 0.0.0.0 port 22.
Jan 31 06:44:28 compute-1 sshd[1005]: Server listening on :: port 22.
Jan 31 06:44:28 compute-1 sudo[63800]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:28 compute-1 sudo[63956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kriaxqxmtesfamybadheenpwfcmjfflx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841868.702272-771-51018519656077/AnsiballZ_file.py'
Jan 31 06:44:28 compute-1 sudo[63956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:29 compute-1 python3.9[63958]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:29 compute-1 sudo[63956]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:29 compute-1 sudo[64108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybukohlwrpgjzsgrsgznkzbeueaotkzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841869.3160677-795-114854790538377/AnsiballZ_stat.py'
Jan 31 06:44:29 compute-1 sudo[64108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:29 compute-1 python3.9[64110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:29 compute-1 sudo[64108]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:30 compute-1 sudo[64231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nddcsarmphvzvgkvkaecpcbrorrhqydf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841869.3160677-795-114854790538377/AnsiballZ_copy.py'
Jan 31 06:44:30 compute-1 sudo[64231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:30 compute-1 python3.9[64233]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841869.3160677-795-114854790538377/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:30 compute-1 sudo[64231]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:31 compute-1 sudo[64383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eklgsestckgbhhqgasazxmxoatsobzfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841870.9573042-849-31132962577952/AnsiballZ_timezone.py'
Jan 31 06:44:31 compute-1 sudo[64383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:31 compute-1 python3.9[64385]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 06:44:31 compute-1 systemd[1]: Starting Time & Date Service...
Jan 31 06:44:31 compute-1 systemd[1]: Started Time & Date Service.
Jan 31 06:44:31 compute-1 sudo[64383]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:32 compute-1 sudo[64539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixpiwxobhlluomaylqguuzquaquxloth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841872.4913814-876-66384752879253/AnsiballZ_file.py'
Jan 31 06:44:32 compute-1 sudo[64539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:32 compute-1 python3.9[64541]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:32 compute-1 sudo[64539]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:33 compute-1 sudo[64691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoeowihrmpbjkemenjwtpcgwryvscdxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841873.200292-900-8550834287697/AnsiballZ_stat.py'
Jan 31 06:44:33 compute-1 sudo[64691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:33 compute-1 python3.9[64693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:33 compute-1 sudo[64691]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:33 compute-1 sudo[64814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlpjbnplhbdadptmgsummyikvcxxuwwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841873.200292-900-8550834287697/AnsiballZ_copy.py'
Jan 31 06:44:33 compute-1 sudo[64814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:34 compute-1 python3.9[64816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841873.200292-900-8550834287697/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:34 compute-1 sudo[64814]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:34 compute-1 sudo[64966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-punufjmveycegnigpdhkciipneltjkxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841874.404304-945-21102867814548/AnsiballZ_stat.py'
Jan 31 06:44:34 compute-1 sudo[64966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:34 compute-1 python3.9[64968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:34 compute-1 sudo[64966]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:35 compute-1 sudo[65089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aftdyqearcxdpjkazecvqfskamaairfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841874.404304-945-21102867814548/AnsiballZ_copy.py'
Jan 31 06:44:35 compute-1 sudo[65089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:35 compute-1 python3.9[65091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841874.404304-945-21102867814548/.source.yaml _original_basename=.ght5m8ov follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:35 compute-1 sudo[65089]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:35 compute-1 sudo[65241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvbvuavhfbvmshmosrvxwqjzfxvvecg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841875.5941565-990-210752084389095/AnsiballZ_stat.py'
Jan 31 06:44:35 compute-1 sudo[65241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:36 compute-1 python3.9[65243]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:36 compute-1 sudo[65241]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:36 compute-1 sudo[65364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlghdldebpuamqfauipnnouuzmsjgqlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841875.5941565-990-210752084389095/AnsiballZ_copy.py'
Jan 31 06:44:36 compute-1 sudo[65364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:36 compute-1 python3.9[65366]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841875.5941565-990-210752084389095/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:36 compute-1 sudo[65364]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:37 compute-1 sudo[65516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zedmtviglgabtasdelvwwqymwqycpqaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841876.838338-1035-20850270403231/AnsiballZ_command.py'
Jan 31 06:44:37 compute-1 sudo[65516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:37 compute-1 python3.9[65518]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:44:37 compute-1 sudo[65516]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:37 compute-1 sudo[65669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyqgivftzugcczofnakdpciqtvwgtyih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841877.5530353-1059-207472838982251/AnsiballZ_command.py'
Jan 31 06:44:37 compute-1 sudo[65669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:37 compute-1 python3.9[65671]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:44:37 compute-1 sudo[65669]: pam_unix(sudo:session): session closed for user root
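
`nft -j list ruleset` dumps the whole ruleset as JSON, which is far easier to assert against than the plain-text listing; the playbook presumably registers this output to decide what still needs to change. A minimal consumer of that JSON (the tuple layout is just an example):

    import json
    import subprocess

    def list_chains() -> list[tuple[str, str, str]]:
        """Return (family, table, chain) triples from the live nftables ruleset."""
        out = subprocess.run(["nft", "-j", "list", "ruleset"],
                             capture_output=True, text=True, check=True).stdout
        doc = json.loads(out)
        chains = []
        for item in doc.get("nftables", []):
            chain = item.get("chain")
            if chain:
                chains.append((chain["family"], chain["table"], chain["name"]))
        return chains
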
Jan 31 06:44:38 compute-1 sudo[65822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnzwvbuvglrzyydajaxyfjsinrdsuoei ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769841878.1817486-1083-281266043580249/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 06:44:38 compute-1 sudo[65822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:38 compute-1 python3[65824]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 06:44:38 compute-1 sudo[65822]: pam_unix(sudo:session): session closed for user root
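
edpm_nftables_from_files is a custom EDPM module rather than a core Ansible one. Judging only by its src=/var/lib/edpm-config/firewall argument and the per-service YAML files staged earlier (ceph-networks.yaml, sshd-networks.yaml, the base and user rule files), it plausibly aggregates those rule files into one rule list. A heavily hedged reconstruction of that aggregation step, not the module's actual code:

    from pathlib import Path

    import yaml   # PyYAML

    def gather_rules(src: str = "/var/lib/edpm-config/firewall") -> list:
        """Concatenate rule entries from every YAML file in the directory."""
        rules = []
        for path in sorted(Path(src).glob("*.yaml")):
            data = yaml.safe_load(path.read_text()) or []
            rules.extend(data)            # assumes each file holds a YAML list
        return rules
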
Jan 31 06:44:39 compute-1 sudo[65974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rydeqkkxhaqvkagpkbbtotmayhghfdrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841879.0528498-1107-39601834006674/AnsiballZ_stat.py'
Jan 31 06:44:39 compute-1 sudo[65974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:39 compute-1 python3.9[65976]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:39 compute-1 sudo[65974]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:39 compute-1 sudo[66097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpiypvhunyfrdosocjxtfmhimtaujuou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841879.0528498-1107-39601834006674/AnsiballZ_copy.py'
Jan 31 06:44:39 compute-1 sudo[66097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:39 compute-1 python3.9[66099]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841879.0528498-1107-39601834006674/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:39 compute-1 sudo[66097]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:40 compute-1 sudo[66249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqmrlyixmltnyadqaqwcvycmuoeewzub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841880.2792058-1152-184213610505428/AnsiballZ_stat.py'
Jan 31 06:44:40 compute-1 sudo[66249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:40 compute-1 python3.9[66251]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:40 compute-1 sudo[66249]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:41 compute-1 sudo[66372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdvtudmfozagybxaovvywgmjarmvwsqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841880.2792058-1152-184213610505428/AnsiballZ_copy.py'
Jan 31 06:44:41 compute-1 sudo[66372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:41 compute-1 python3.9[66374]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841880.2792058-1152-184213610505428/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:41 compute-1 sudo[66372]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:42 compute-1 sudo[66524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhmsfgsejetjkgsxfekftltsxqrwzbdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841881.8211434-1197-258302796339413/AnsiballZ_stat.py'
Jan 31 06:44:42 compute-1 sudo[66524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:42 compute-1 python3.9[66526]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:42 compute-1 sudo[66524]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:42 compute-1 sudo[66647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqhuigaorbswcxfkrgwqxfhzelqrdsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841881.8211434-1197-258302796339413/AnsiballZ_copy.py'
Jan 31 06:44:42 compute-1 sudo[66647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:42 compute-1 python3.9[66649]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841881.8211434-1197-258302796339413/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:42 compute-1 sudo[66647]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:43 compute-1 sudo[66799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruekusumdyehuavbqkyvgnccndbuntpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841883.0205595-1243-266593746057568/AnsiballZ_stat.py'
Jan 31 06:44:43 compute-1 sudo[66799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:43 compute-1 python3.9[66801]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:43 compute-1 sudo[66799]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:43 compute-1 sudo[66922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuoekgjzsnclxvjvprsyntshzkwgbzwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841883.0205595-1243-266593746057568/AnsiballZ_copy.py'
Jan 31 06:44:43 compute-1 sudo[66922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:44 compute-1 python3.9[66924]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841883.0205595-1243-266593746057568/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:44 compute-1 sudo[66922]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:44 compute-1 sudo[67074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqskbprtjgkuyfjwlihdhhikbvhgtqln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841884.2620895-1287-34528745650280/AnsiballZ_stat.py'
Jan 31 06:44:44 compute-1 sudo[67074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:44 compute-1 python3.9[67076]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:44:44 compute-1 sudo[67074]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:45 compute-1 sudo[67197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myetiesryffkuxozvfwcslciknpkqtuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841884.2620895-1287-34528745650280/AnsiballZ_copy.py'
Jan 31 06:44:45 compute-1 sudo[67197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:45 compute-1 python3.9[67199]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769841884.2620895-1287-34528745650280/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:45 compute-1 sudo[67197]: pam_unix(sudo:session): session closed for user root
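
Each of the five nftables files above (edpm-jumps, edpm-update-jumps, edpm-flushes, edpm-chains, edpm-rules) lands through the same two-module pattern: ansible.legacy.stat fetches the destination's SHA-1 checksum, and ansible.legacy.copy rewrites the file only when the rendered template differs. Note that edpm-jumps.nft and edpm-update-jumps.nft are rendered from the same jump-chain.j2 template and carry an identical checksum. A minimal shell sketch of what one stat/copy pair does (the staging path is Ansible's per-task temp file and varies; the placeholder below is illustrative, not a real path):

    # Idempotent install of a rendered template (sketch of the stat + copy pair)
    src=$HOME/.ansible/tmp/ansible-tmp-XXX/.source.nft   # per-task temp path, varies
    dst=/etc/nftables/edpm-jumps.nft
    if [ "$(sha1sum "$src" | cut -d' ' -f1)" != "$(sha1sum "$dst" 2>/dev/null | cut -d' ' -f1)" ]; then
        install -o root -g root -m 0600 "$src" "$dst"
    fi
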
Jan 31 06:44:45 compute-1 sudo[67349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbgfanpirezfmvgyzdhsmwstzpnjsdqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841885.6408854-1332-172615334068768/AnsiballZ_file.py'
Jan 31 06:44:45 compute-1 sudo[67349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:46 compute-1 python3.9[67351]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:46 compute-1 sudo[67349]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:46 compute-1 sudo[67501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkmmmesdrcgctoqmafdicznmhgqdfmvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841886.3648636-1356-3504963885040/AnsiballZ_command.py'
Jan 31 06:44:46 compute-1 sudo[67501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:46 compute-1 python3.9[67503]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:44:46 compute-1 sudo[67501]: pam_unix(sudo:session): session closed for user root
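
Before anything is applied, the whole ruleset is dry-run checked: the five files are concatenated in load order (chains first, then flushes, rules, update-jumps, jumps) and piped into nft in check-only mode, so a syntax or dependency error aborts the play before the live tables are touched. The logged command, reformatted for readability:

    # Validate the combined EDPM ruleset without modifying live tables
    # (-c = check only; -f - = read ruleset from stdin)
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
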
Jan 31 06:44:47 compute-1 sudo[67660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnlzeyhvncxdoxomspwtilooidurvjlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841887.1123776-1380-126063755899467/AnsiballZ_blockinfile.py'
Jan 31 06:44:47 compute-1 sudo[67660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:47 compute-1 python3.9[67662]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:47 compute-1 sudo[67660]: pam_unix(sudo:session): session closed for user root
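
The blockinfile task makes the ruleset boot-persistent by maintaining a managed block of include statements in /etc/sysconfig/nftables.conf, the file that nftables.service loads on RHEL at startup; the candidate file is validated with nft -c -f %s before the write lands. From the logged parameters (marker "# {mark} ANSIBLE MANAGED BLOCK", begin/end markers BEGIN/END), the resulting block reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
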
Jan 31 06:44:48 compute-1 sudo[67813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azzetpfblztetmcykbgfmqikfgsgouoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841888.0108228-1407-40733321670103/AnsiballZ_file.py'
Jan 31 06:44:48 compute-1 sudo[67813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:48 compute-1 python3.9[67815]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:48 compute-1 sudo[67813]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:48 compute-1 sudo[67965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tymfmoikugmdgzanccmccrmdnzdoqjog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841888.6919734-1407-15293060067778/AnsiballZ_file.py'
Jan 31 06:44:48 compute-1 sudo[67965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:49 compute-1 python3.9[67967]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:44:49 compute-1 sudo[67965]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:49 compute-1 sudo[68117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsdnyrnqpzhtqurmgkhcafztnrwpydhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841889.2969747-1452-28277768888487/AnsiballZ_mount.py'
Jan 31 06:44:49 compute-1 sudo[68117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:49 compute-1 python3.9[68119]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 06:44:50 compute-1 sudo[68117]: pam_unix(sudo:session): session closed for user root
Jan 31 06:44:50 compute-1 sudo[68270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbvpdkwatzfekskydavsmqkuccdnjpsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841890.1644359-1452-255967376341252/AnsiballZ_mount.py'
Jan 31 06:44:50 compute-1 sudo[68270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:44:50 compute-1 python3.9[68272]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 06:44:50 compute-1 sudo[68270]: pam_unix(sudo:session): session closed for user root
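
The two file + mount task pairs above prepare hugepage-backed memory for the compute role: one mount point per page size, then a hugetlbfs mount with an explicit pagesize option. state=mounted combined with boot=True means ansible.posix.mount both mounts immediately and records the entry in /etc/fstab. The hand-run equivalent, using the logged paths and options:

    # Per-page-size hugetlbfs mounts (src is "none" for pseudo-filesystems)
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # matching /etc/fstab lines written by the module (dump=0, passno=0):
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
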
Jan 31 06:44:51 compute-1 sshd-session[59070]: Connection closed by 192.168.122.30 port 38228
Jan 31 06:44:51 compute-1 sshd-session[59067]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:44:51 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 06:44:51 compute-1 systemd[1]: session-15.scope: Consumed 29.673s CPU time.
Jan 31 06:44:51 compute-1 systemd-logind[788]: Session 15 logged out. Waiting for processes to exit.
Jan 31 06:44:51 compute-1 systemd-logind[788]: Removed session 15.
Jan 31 06:45:00 compute-1 sshd-session[68298]: Accepted publickey for zuul from 192.168.122.30 port 45586 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:45:00 compute-1 systemd-logind[788]: New session 16 of user zuul.
Jan 31 06:45:00 compute-1 systemd[1]: Started Session 16 of User zuul.
Jan 31 06:45:00 compute-1 sshd-session[68298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:45:00 compute-1 sudo[68451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkdcaqgliqeirlcfdryjyaiuqrimawrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841900.2074695-24-79859387897117/AnsiballZ_tempfile.py'
Jan 31 06:45:00 compute-1 sudo[68451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:01 compute-1 python3.9[68453]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 06:45:01 compute-1 sudo[68451]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:01 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 06:45:01 compute-1 sudo[68605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhvrdsseyyaaftbkmhywfdlixeacrtmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841901.4919279-60-100712629652434/AnsiballZ_stat.py'
Jan 31 06:45:01 compute-1 sudo[68605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:02 compute-1 python3.9[68607]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:45:02 compute-1 sudo[68605]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:03 compute-1 sudo[68757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxjpchgocczohxvjpwxqcpscdrvvusas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841902.5989861-90-183125756436198/AnsiballZ_setup.py'
Jan 31 06:45:03 compute-1 sudo[68757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:03 compute-1 python3.9[68759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:45:03 compute-1 sudo[68757]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:04 compute-1 sudo[68909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubbnaftdjxbtkhkqlsxfvifmgnaltrik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841903.7756128-115-257410783755138/AnsiballZ_blockinfile.py'
Jan 31 06:45:04 compute-1 sudo[68909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:04 compute-1 python3.9[68911]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7oaNruBF82m85jI32p4Mj+yn4T3FBHQ7cMc6lELq3AspplPtBQsmBgDfhjfVg1I4+kEqlqvMmBXvkZu7SGFPiUPQlioc6MCfPrB8/wSLBG/pEWqlStSpdkbOBEEivzl5kpIYrbNpwH3q/sL6mbZB4fYlpLP6SY4uxDutOWZutUUlzDguTJUprXhv8BnwgqPoBM7wwuPY+U9PSdLY8pxG40xO+UQ9llhK0rTX9Io1k8OtlJeJu/zVCmcEIp7bMmk4GLYHzfhe1JW7+O8RnNxmyEbfEZpJRKD+squSzbEC4jYJSF2ZIG9++KZY33LUAy3Krn46o8Bo+vBJX3HRYdgtGaejzyYimDJ2OPL+UB5K9tTqqKbQlmhZODmFmTVgZabEHzHSuT+dTFBmmzW17ll4cWYHemkonjSM+nl3zO9Quwp+HRmkAa5/uJIFeVLZInx7/aeHCar427H5OnfpuSLc1X9uSNlPAvvIdlXagkfCOLBFXlBSPhkDBqBq9MX7u0ic=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ4MRNp0lqMmdnWHkBaN0bYiu3NyVZLTvXbzAb78HL/H
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKrqTuBK9SuQu9hS9hBIqRv9weMcR5IS3TOGti2Gz24hxwuCxS2PuVSyWVacVoXmRrXt6Nl3b5KRQ35C6gTvbIU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCux/eS/9tJWdvcz7CSqzbT3/CFFfMIoClo+OiLmW4DHDCsL7b4Sd8s4ZGetrM/b9d+nZhH3I0np2S0wkbf0kzxDpFnzV/CqSLPcHC1GFG8DlXIWkbbK3H9Nc+il8eG2rceqOXs5LCS6H6lOeSAynOJd7kkW0euL4YtQcqH6/PCpvaHnyAXOL9+76w6apGzrWBRGSKGvwJiCrundYhP4TjMSlb6ITyIdF0bE1617p7zZOh+CQt6wB17bBAKL/ZR7qQsjbIhW1zwJ7R0NuWJrgxemGImJ3YRN+2WJ5UpNJxoMPkwC67IfW4avOTykueyK9cACQ/OLPMvhxBVzsBBfmV7Xl5RquVXDj1OrXfG+zVu5YV0+GEtmxZhptXdzBvMkDBAr3hRB/jE/GZeCx/d6eoA3vfyT7tFrBaunMaiIutt/GbmQBhPSqSrqgau7M8rqs7ocyOCZI3ezwskVMxOX8yCOVAib7rHUkj+I+B48V/7MXiHOkBpOBUmgGSiM2whUe8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJiG2htD5mCqa+IIAJsjOKgNJpPNmrlfh2g7QGI6KcQd
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG8QHiFr+d3LEQcNktaGAAZTvvRlNt/N3ZuLInnbRWqbA8w9jqUbMmg6m0Yc2Z+a+4iHrAMgRl5PGiHtvzbSe78=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVmjyOgMrBcNkKRe/3MkTqg/LhVt3sOvBD2IwLvjJmLe3cxmmFlu3iixT4LIzRscHQxUt6EqOuAiYL2BapPTTPjEaB+TseppBVXIPZfjllMgVy8pSqsZa+MUsbI4pONfcoart2REu5ObJIPOSl3YDAkGB+rxeAE1BD+sYmdlKriC/2JkUcS6p03QSjQnukMP476+uzXmPHLvm7A9TJjN2Oa4FkgJFI8+gFZaKPpHzCdoYD8COI0LYpp49uJ0gHQ7E4AepcpNUZXBgEsYKntsF9J/md1b13dW0ucGniV3eVxfWAH3xMRlwfFrT8TB+iQ74ghNmDEY/CCpZwkpL4W6bV7GT4+3nbvWIJv9/dgPSqeunTbbAWPEu6KM0nOuOGVRtQ6+q4aM3TRwV0DUvZptSGhRnHOekdOBRtiuMOnClub09PJMyOr4fKi3e59CfIx36NjxbNZfwA1j9jS3BDHL5BtATwiuTVMUWtdRYUT0h4zdmDtHkVnnPQBm2C3d7o/8c=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHThs9i/0cwyfrem5xVfEov0dwlVT7YQsUAzvhlKxVcU
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPv7c3x32Z77V8zjbPteGtuwIl3HzfI8HP5le/fNUtef+zMbIe6oyaIlzMLTKYnfaTTkKeVwM+hyTawD64NkAc=
                                             create=True mode=0644 path=/tmp/ansible.kczt6srt state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:45:04 compute-1 sudo[68909]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:04 compute-1 sudo[69061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpwugqdoduophzdsfpyylqgfxgbkhpwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841904.5710726-139-190928032553829/AnsiballZ_command.py'
Jan 31 06:45:04 compute-1 sudo[69061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:05 compute-1 python3.9[69063]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.kczt6srt' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:45:05 compute-1 sudo[69061]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:05 compute-1 sudo[69215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmbwybtvsanygnuobseonsribsjxpvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841905.451241-163-170462249862966/AnsiballZ_file.py'
Jan 31 06:45:05 compute-1 sudo[69215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:06 compute-1 python3.9[69217]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.kczt6srt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:45:06 compute-1 sudo[69215]: pam_unix(sudo:session): session closed for user root
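
Session 16 rebuilds the system-wide known_hosts: ansible.builtin.setup gathers each node's RSA, Ed25519 and ECDSA host keys, blockinfile assembles them in the temp file created by the tempfile task, and a shell task then overwrites /etc/ssh/ssh_known_hosts from that staging copy before deleting it. Redirecting through cat rather than moving the file keeps the destination's existing inode, and with it the ownership and SELinux context of /etc/ssh/ssh_known_hosts; a mv from /tmp would carry the temp file's attributes along. The final two steps by hand:

    # Overwrite known_hosts in place, then drop the staging file
    cat /tmp/ansible.kczt6srt > /etc/ssh/ssh_known_hosts
    rm -f /tmp/ansible.kczt6srt
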
Jan 31 06:45:06 compute-1 sshd-session[68301]: Connection closed by 192.168.122.30 port 45586
Jan 31 06:45:06 compute-1 sshd-session[68298]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:45:06 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 06:45:06 compute-1 systemd[1]: session-16.scope: Consumed 2.745s CPU time.
Jan 31 06:45:06 compute-1 systemd-logind[788]: Session 16 logged out. Waiting for processes to exit.
Jan 31 06:45:06 compute-1 systemd-logind[788]: Removed session 16.
Jan 31 06:45:12 compute-1 sshd-session[69242]: Accepted publickey for zuul from 192.168.122.30 port 36878 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:45:12 compute-1 systemd-logind[788]: New session 17 of user zuul.
Jan 31 06:45:12 compute-1 systemd[1]: Started Session 17 of User zuul.
Jan 31 06:45:12 compute-1 sshd-session[69242]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:45:13 compute-1 python3.9[69395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:45:14 compute-1 sudo[69549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omzccskblmxujnqpwwkqcggelgotspcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841913.5148137-57-133248520613906/AnsiballZ_systemd.py'
Jan 31 06:45:14 compute-1 sudo[69549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:14 compute-1 python3.9[69551]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 06:45:14 compute-1 sudo[69549]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:14 compute-1 sudo[69703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hztfdynaktituqpafcykllsggxaveizs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841914.6309578-81-7883097910111/AnsiballZ_systemd.py'
Jan 31 06:45:14 compute-1 sudo[69703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:15 compute-1 python3.9[69705]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:45:15 compute-1 sudo[69703]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:15 compute-1 sudo[69856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqtwtrcmfuyhoszspbzgimnbsfuovwpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841915.539364-108-75085625898180/AnsiballZ_command.py'
Jan 31 06:45:15 compute-1 sudo[69856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:16 compute-1 python3.9[69858]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:45:16 compute-1 sudo[69856]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:16 compute-1 sudo[70009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fttcntfspjjwapoiwzkiakckkakmrofc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841916.406423-132-144614135360796/AnsiballZ_stat.py'
Jan 31 06:45:16 compute-1 sudo[70009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:16 compute-1 python3.9[70011]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:45:17 compute-1 sudo[70009]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:17 compute-1 sudo[70163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpskmyvivacftnzcgqdrpcgzjvwrathi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841917.2012732-156-155527604923894/AnsiballZ_command.py'
Jan 31 06:45:17 compute-1 sudo[70163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:17 compute-1 python3.9[70165]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:45:17 compute-1 sudo[70163]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:18 compute-1 sudo[70318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfrxyuphsuhrmfogvudfyvynqiuugppd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841917.863558-180-247974605558212/AnsiballZ_file.py'
Jan 31 06:45:18 compute-1 sudo[70318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:18 compute-1 python3.9[70320]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:45:18 compute-1 sudo[70318]: pam_unix(sudo:session): session closed for user root
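
Session 17 applies what session 15 rendered and validated. The chains file is loaded first on its own: it only declares tables and chains, so re-running it is harmless. The stat on /etc/nftables/edpm-rules.nft.changed then checks the marker that was touched after the rules were rendered; because it exists, the flush/rules/update-jumps files are concatenated and loaded, and the marker is removed so an unchanged ruleset is not re-applied on the next run. Reconstructed as a script (the conditional is an assumption; the log only shows the stat followed by the load):

    # Apply chains unconditionally, then rules only if they changed
    nft -f /etc/nftables/edpm-chains.nft
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi
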
Jan 31 06:45:19 compute-1 sshd-session[69245]: Connection closed by 192.168.122.30 port 36878
Jan 31 06:45:19 compute-1 sshd-session[69242]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:45:19 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 06:45:19 compute-1 systemd[1]: session-17.scope: Consumed 3.521s CPU time.
Jan 31 06:45:19 compute-1 systemd-logind[788]: Session 17 logged out. Waiting for processes to exit.
Jan 31 06:45:19 compute-1 systemd-logind[788]: Removed session 17.
Jan 31 06:45:19 compute-1 sshd-session[70345]: Invalid user solv from 2.57.122.238 port 42914
Jan 31 06:45:19 compute-1 sshd-session[70345]: Connection closed by invalid user solv 2.57.122.238 port 42914 [preauth]
Jan 31 06:45:25 compute-1 sshd-session[70347]: Accepted publickey for zuul from 192.168.122.30 port 48950 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:45:25 compute-1 systemd-logind[788]: New session 18 of user zuul.
Jan 31 06:45:25 compute-1 systemd[1]: Started Session 18 of User zuul.
Jan 31 06:45:25 compute-1 sshd-session[70347]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:45:26 compute-1 python3.9[70500]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:45:27 compute-1 sudo[70654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxhczoksbnvbyykzhjnnkbigsmgebzox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841927.4835432-63-122461755407434/AnsiballZ_setup.py'
Jan 31 06:45:27 compute-1 sudo[70654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:28 compute-1 python3.9[70656]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:45:28 compute-1 sudo[70654]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:28 compute-1 sudo[70738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcioseuhbqslxviovgwobmhcuedqowxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769841927.4835432-63-122461755407434/AnsiballZ_dnf.py'
Jan 31 06:45:28 compute-1 sudo[70738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:28 compute-1 python3.9[70740]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 06:45:30 compute-1 sudo[70738]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:31 compute-1 python3.9[70891]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:45:33 compute-1 python3.9[71042]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
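
These two checks decide whether the node needs a reboot before deployment continues: needs-restarting -r (from the yum-utils package installed just above) exits non-zero when the running kernel or core libraries have been updated, and the find looks for flag files that other roles may have dropped into /var/lib/openstack/reboot_required/. By hand:

    # Exit status: 0 = no reboot needed, 1 = reboot required
    needs-restarting -r; echo "needs-restarting rc=$?"
    # Any file here is treated as a pending reboot request
    find /var/lib/openstack/reboot_required/ -type f
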
Jan 31 06:45:34 compute-1 python3.9[71192]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:45:34 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:45:34 compute-1 python3.9[71343]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:45:35 compute-1 sshd-session[70350]: Connection closed by 192.168.122.30 port 48950
Jan 31 06:45:35 compute-1 sshd-session[70347]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:45:35 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 06:45:35 compute-1 systemd[1]: session-18.scope: Consumed 5.634s CPU time.
Jan 31 06:45:35 compute-1 systemd-logind[788]: Session 18 logged out. Waiting for processes to exit.
Jan 31 06:45:35 compute-1 systemd-logind[788]: Removed session 18.
Jan 31 06:45:43 compute-1 sshd-session[71368]: Accepted publickey for zuul from 38.102.83.142 port 39992 ssh2: RSA SHA256:aLVmJr8JWWHWS0llpJoB9Gsrlwh5Xj4FMq/6l3bp+3U
Jan 31 06:45:43 compute-1 systemd-logind[788]: New session 19 of user zuul.
Jan 31 06:45:43 compute-1 systemd[1]: Started Session 19 of User zuul.
Jan 31 06:45:43 compute-1 sshd-session[71368]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:45:43 compute-1 sudo[71444]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnckgfvcwihohyvpqqhbqoziynsrjrve ; /usr/bin/python3'
Jan 31 06:45:43 compute-1 sudo[71444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:44 compute-1 useradd[71448]: new group: name=ceph-admin, GID=42478
Jan 31 06:45:44 compute-1 useradd[71448]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 31 06:45:45 compute-1 sudo[71444]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:45 compute-1 sudo[71530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tndjvkvxvfmvbsahpszhbbifmhmoyepd ; /usr/bin/python3'
Jan 31 06:45:45 compute-1 sudo[71530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:45 compute-1 sudo[71530]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:46 compute-1 sudo[71603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdrhizthnjuvtsojcilqevmsmdnbpgx ; /usr/bin/python3'
Jan 31 06:45:46 compute-1 sudo[71603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:46 compute-1 sudo[71603]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:46 compute-1 sudo[71653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svqnmynppofltsnjhkuyppnyvqhccnum ; /usr/bin/python3'
Jan 31 06:45:46 compute-1 sudo[71653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:46 compute-1 sudo[71653]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:46 compute-1 sudo[71679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abswcanaxtlwolhxnkukcdbrqnttmizc ; /usr/bin/python3'
Jan 31 06:45:46 compute-1 sudo[71679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:47 compute-1 sudo[71679]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:47 compute-1 sudo[71705]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyiihopcrwosixlatyjgoduocbdnpniz ; /usr/bin/python3'
Jan 31 06:45:47 compute-1 sudo[71705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:47 compute-1 sudo[71705]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:47 compute-1 sudo[71731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etvnyoaiisuazglqdclolrcmqjxystxj ; /usr/bin/python3'
Jan 31 06:45:47 compute-1 sudo[71731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:48 compute-1 sudo[71731]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:48 compute-1 sudo[71809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtsqqvumkldmejphiwezeysjigkhdrxs ; /usr/bin/python3'
Jan 31 06:45:48 compute-1 sudo[71809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:48 compute-1 sudo[71809]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:48 compute-1 sudo[71882]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juxwfyxfctdbuhhnybbuugexfvwlbofz ; /usr/bin/python3'
Jan 31 06:45:48 compute-1 sudo[71882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:48 compute-1 sudo[71882]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:49 compute-1 sudo[71984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vldeqoszgmnrwlayeaihqyqbzslrywxx ; /usr/bin/python3'
Jan 31 06:45:49 compute-1 sudo[71984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:49 compute-1 sudo[71984]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:49 compute-1 sudo[72057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xexnqzszgahmkkhttklokvxxwbaxypfm ; /usr/bin/python3'
Jan 31 06:45:49 compute-1 sudo[72057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:49 compute-1 sudo[72057]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:50 compute-1 sudo[72107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiohxncbepvepiqymyrfwwglrucgisin ; /usr/bin/python3'
Jan 31 06:45:50 compute-1 sudo[72107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:50 compute-1 python3[72109]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:45:51 compute-1 sudo[72107]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:51 compute-1 sudo[72202]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxiaqfaljjvwxzjdbtastlcowaybkqf ; /usr/bin/python3'
Jan 31 06:45:51 compute-1 sudo[72202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:51 compute-1 python3[72204]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 06:45:53 compute-1 sudo[72202]: pam_unix(sudo:session): session closed for user root
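
This dnf task pulls in the host-side prerequisites for the Ceph deployment that follows: lvm2 for the volume group created below, podman to run the cephadm containers, plus util-linux (losetup, lsblk) and jq. Equivalent to:

    dnf install -y util-linux lvm2 jq podman
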
Jan 31 06:45:53 compute-1 sudo[72229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yasxsrdsjlnkfdjyfifoaswcdypijomi ; /usr/bin/python3'
Jan 31 06:45:53 compute-1 sudo[72229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:53 compute-1 python3[72231]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 06:45:53 compute-1 sudo[72229]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:54 compute-1 sudo[72255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhxaesuaegsgypvvuhomxnvsowipgcty ; /usr/bin/python3'
Jan 31 06:45:54 compute-1 sudo[72255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:54 compute-1 python3[72257]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:45:54 compute-1 kernel: loop: module loaded
Jan 31 06:45:54 compute-1 kernel: loop3: detected capacity change from 0 to 14680064
Jan 31 06:45:54 compute-1 sudo[72255]: pam_unix(sudo:session): session closed for user root
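
The OSD gets a file-backed block device rather than a real disk: dd with count=0 and seek=7G writes no data at all, it merely extends the file to 7 GiB, producing a sparse image; losetup then exposes it as /dev/loop3. The kernel line confirms the size: 14680064 sectors x 512 bytes = 7 GiB. The logged one-liner, split out:

    # Create a 7 GiB sparse backing file (no blocks allocated yet)
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
    # Expose it as a block device for LVM/Ceph to consume
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk
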
Jan 31 06:45:54 compute-1 sudo[72290]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgbtljtwaagrbkkpkshlxwoaydeizqxj ; /usr/bin/python3'
Jan 31 06:45:54 compute-1 sudo[72290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:54 compute-1 python3[72292]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:45:54 compute-1 lvm[72295]: PV /dev/loop3 not used.
Jan 31 06:45:54 compute-1 lvm[72297]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:45:54 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 06:45:54 compute-1 lvm[72303]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 06:45:54 compute-1 lvm[72307]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:45:54 compute-1 lvm[72307]: VG ceph_vg0 finished
Jan 31 06:45:54 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 06:45:54 compute-1 sudo[72290]: pam_unix(sudo:session): session closed for user root
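
On top of the loop device, a dedicated LVM stack is built for the OSD: a physical volume, a volume group ceph_vg0, and a single logical volume ceph_lv0 claiming all free extents. The interleaved lvm/systemd messages are event-based autoactivation: as soon as the new PV completes the VG, systemd runs vgchange -aay for it and the activation service exits. The commands from the log:

    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    # -l +100%FREE: one LV spanning every free extent in the VG
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs
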
Jan 31 06:45:55 compute-1 sudo[72383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuimricpodaepfkarvyibkhpjiiyppik ; /usr/bin/python3'
Jan 31 06:45:55 compute-1 sudo[72383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:55 compute-1 python3[72385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:45:55 compute-1 sudo[72383]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:55 compute-1 sudo[72456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vskxmeagwptxqiwgpzmknkmfpjksooii ; /usr/bin/python3'
Jan 31 06:45:55 compute-1 sudo[72456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:55 compute-1 python3[72458]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769841955.1422815-37046-148911774574926/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:45:55 compute-1 sudo[72456]: pam_unix(sudo:session): session closed for user root
Jan 31 06:45:56 compute-1 sudo[72506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsfqugtncgfvucfgkcbztugrhkniygpa ; /usr/bin/python3'
Jan 31 06:45:56 compute-1 sudo[72506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:45:56 compute-1 chronyd[58586]: Selected source 216.197.156.83 (pool.ntp.org)
Jan 31 06:45:56 compute-1 python3[72508]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:45:56 compute-1 systemd[1]: Reloading.
Jan 31 06:45:56 compute-1 systemd-sysv-generator[72539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:45:56 compute-1 systemd-rc-local-generator[72534]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:45:56 compute-1 systemd[1]: Starting Ceph OSD losetup...
Jan 31 06:45:56 compute-1 bash[72548]: /dev/loop3: [64513]:4329570 (/var/lib/ceph-osd-0.img)
Jan 31 06:45:56 compute-1 systemd[1]: Finished Ceph OSD losetup.
Jan 31 06:45:56 compute-1 lvm[72549]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:45:56 compute-1 lvm[72549]: VG ceph_vg0 finished
Jan 31 06:45:56 compute-1 sudo[72506]: pam_unix(sudo:session): session closed for user root
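
Loop attachments do not survive a reboot, so a oneshot unit is installed to re-create /dev/loop3 at boot; the copy task above ships it from the ceph-osd-losetup.service.j2 template, and the start logs show losetup output emitted from a bash ExecStart. The unit body itself is not logged; a plausible minimal sketch (only the unit path, the description, and the loop3/backing-file pairing come from the log, everything else is an assumption):

    # Hypothetical reconstruction of the shipped unit file
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    RemainAfterExit=true
    # Print the attachment if it exists, otherwise (re)attach the backing file
    ExecStart=/usr/bin/bash -c '/usr/sbin/losetup /dev/loop3 || /usr/sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl enable --now ceph-osd-losetup-0.service
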
Jan 31 06:45:59 compute-1 python3[72573]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:46:37 compute-1 sshd-session[72618]: Received disconnect from 45.148.10.151 port 15218:11:  [preauth]
Jan 31 06:46:37 compute-1 sshd-session[72618]: Disconnected from authenticating user root 45.148.10.151 port 15218 [preauth]
Jan 31 06:48:22 compute-1 sshd-session[72621]: Accepted publickey for ceph-admin from 192.168.122.100 port 35806 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:22 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 06:48:22 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 06:48:22 compute-1 systemd-logind[788]: New session 20 of user ceph-admin.
Jan 31 06:48:22 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 06:48:22 compute-1 systemd[1]: Starting User Manager for UID 42477...
Jan 31 06:48:22 compute-1 systemd[72625]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:22 compute-1 systemd[72625]: Queued start job for default target Main User Target.
Jan 31 06:48:22 compute-1 systemd[72625]: Created slice User Application Slice.
Jan 31 06:48:22 compute-1 systemd[72625]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 06:48:22 compute-1 systemd[72625]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 06:48:22 compute-1 systemd[72625]: Reached target Paths.
Jan 31 06:48:22 compute-1 systemd[72625]: Reached target Timers.
Jan 31 06:48:22 compute-1 systemd[72625]: Starting D-Bus User Message Bus Socket...
Jan 31 06:48:22 compute-1 systemd[72625]: Starting Create User's Volatile Files and Directories...
Jan 31 06:48:22 compute-1 systemd[72625]: Listening on D-Bus User Message Bus Socket.
Jan 31 06:48:22 compute-1 systemd[72625]: Finished Create User's Volatile Files and Directories.
Jan 31 06:48:22 compute-1 systemd[72625]: Reached target Sockets.
Jan 31 06:48:22 compute-1 systemd[72625]: Reached target Basic System.
Jan 31 06:48:22 compute-1 systemd[72625]: Reached target Main User Target.
Jan 31 06:48:22 compute-1 systemd[72625]: Startup finished in 113ms.
Jan 31 06:48:22 compute-1 systemd[1]: Started User Manager for UID 42477.
Jan 31 06:48:22 compute-1 sshd-session[72639]: Accepted publickey for ceph-admin from 192.168.122.100 port 35816 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:22 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Jan 31 06:48:22 compute-1 sshd-session[72621]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:22 compute-1 systemd-logind[788]: New session 22 of user ceph-admin.
Jan 31 06:48:22 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Jan 31 06:48:22 compute-1 sshd-session[72639]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:22 compute-1 sudo[72646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:22 compute-1 sudo[72646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:22 compute-1 sudo[72646]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:22 compute-1 sudo[72671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:48:22 compute-1 sudo[72671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:22 compute-1 sudo[72671]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:22 compute-1 sshd-session[72696]: Accepted publickey for ceph-admin from 192.168.122.100 port 35818 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:22 compute-1 systemd-logind[788]: New session 23 of user ceph-admin.
Jan 31 06:48:23 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 06:48:23 compute-1 sshd-session[72696]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:23 compute-1 sudo[72700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:23 compute-1 sudo[72700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:23 compute-1 sudo[72700]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:23 compute-1 sudo[72725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Jan 31 06:48:23 compute-1 sudo[72725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:23 compute-1 sudo[72725]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:23 compute-1 sshd-session[72750]: Accepted publickey for ceph-admin from 192.168.122.100 port 35834 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:23 compute-1 systemd-logind[788]: New session 24 of user ceph-admin.
Jan 31 06:48:23 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 06:48:23 compute-1 sshd-session[72750]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:23 compute-1 sudo[72754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:23 compute-1 sudo[72754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:23 compute-1 sudo[72754]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:23 compute-1 sudo[72779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 31 06:48:23 compute-1 sudo[72779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:23 compute-1 sudo[72779]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:23 compute-1 sshd-session[72804]: Accepted publickey for ceph-admin from 192.168.122.100 port 35836 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:23 compute-1 systemd-logind[788]: New session 25 of user ceph-admin.
Jan 31 06:48:23 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 06:48:23 compute-1 sshd-session[72804]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:23 compute-1 sudo[72808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:23 compute-1 sudo[72808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:23 compute-1 sudo[72808]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:23 compute-1 sudo[72833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:48:23 compute-1 sudo[72833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:23 compute-1 sudo[72833]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:24 compute-1 sshd-session[72858]: Accepted publickey for ceph-admin from 192.168.122.100 port 35852 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:24 compute-1 systemd-logind[788]: New session 26 of user ceph-admin.
Jan 31 06:48:24 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 06:48:24 compute-1 sshd-session[72858]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:24 compute-1 sudo[72862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:24 compute-1 sudo[72862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:24 compute-1 sudo[72862]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:24 compute-1 sudo[72887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:48:24 compute-1 sudo[72887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:24 compute-1 sudo[72887]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:24 compute-1 sshd-session[72912]: Accepted publickey for ceph-admin from 192.168.122.100 port 35866 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:24 compute-1 systemd-logind[788]: New session 27 of user ceph-admin.
Jan 31 06:48:24 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 06:48:24 compute-1 sshd-session[72912]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:24 compute-1 sudo[72916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:24 compute-1 sudo[72916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:24 compute-1 sudo[72916]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:24 compute-1 sudo[72941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 31 06:48:24 compute-1 sudo[72941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:24 compute-1 sudo[72941]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:24 compute-1 sshd-session[72966]: Accepted publickey for ceph-admin from 192.168.122.100 port 35878 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:24 compute-1 systemd-logind[788]: New session 28 of user ceph-admin.
Jan 31 06:48:24 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 06:48:24 compute-1 sshd-session[72966]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:24 compute-1 sudo[72970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:24 compute-1 sudo[72970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:24 compute-1 sudo[72970]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:24 compute-1 sudo[72995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:48:24 compute-1 sudo[72995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:24 compute-1 sudo[72995]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:25 compute-1 sshd-session[73020]: Accepted publickey for ceph-admin from 192.168.122.100 port 35894 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:25 compute-1 systemd-logind[788]: New session 29 of user ceph-admin.
Jan 31 06:48:25 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 06:48:25 compute-1 sshd-session[73020]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:25 compute-1 sudo[73024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:25 compute-1 sudo[73024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:25 compute-1 sudo[73024]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:25 compute-1 sudo[73049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 31 06:48:25 compute-1 sudo[73049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:25 compute-1 sudo[73049]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:25 compute-1 sshd-session[73074]: Accepted publickey for ceph-admin from 192.168.122.100 port 35896 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:25 compute-1 systemd-logind[788]: New session 30 of user ceph-admin.
Jan 31 06:48:25 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 06:48:25 compute-1 sshd-session[73074]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:25 compute-1 sshd-session[73101]: Accepted publickey for ceph-admin from 192.168.122.100 port 35910 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:25 compute-1 systemd-logind[788]: New session 31 of user ceph-admin.
Jan 31 06:48:25 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 06:48:25 compute-1 sshd-session[73101]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:26 compute-1 sudo[73105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:26 compute-1 sudo[73105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:26 compute-1 sudo[73105]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:26 compute-1 sudo[73130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 31 06:48:26 compute-1 sudo[73130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:26 compute-1 sudo[73130]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:26 compute-1 sshd-session[73155]: Accepted publickey for ceph-admin from 192.168.122.100 port 35914 ssh2: RSA SHA256:mwWpeM7tAjqTEhDYBfZ1xy23Ku33+7aNpzjVWfS+FB8
Jan 31 06:48:26 compute-1 systemd-logind[788]: New session 32 of user ceph-admin.
Jan 31 06:48:26 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 06:48:26 compute-1 sshd-session[73155]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 06:48:26 compute-1 sudo[73159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:26 compute-1 sudo[73159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:26 compute-1 sudo[73159]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:26 compute-1 sudo[73184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Jan 31 06:48:26 compute-1 sudo[73184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:26 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:26 compute-1 sudo[73184]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:26 compute-1 sudo[73227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:26 compute-1 sudo[73227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:26 compute-1 sudo[73227]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:26 compute-1 sudo[73252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:48:26 compute-1 sudo[73252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:26 compute-1 sudo[73252]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:26 compute-1 sudo[73277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:26 compute-1 sudo[73277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:26 compute-1 sudo[73277]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:26 compute-1 sudo[73302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 06:48:26 compute-1 sudo[73302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:26 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:27 compute-1 sudo[73302]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:27 compute-1 sudo[73348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:27 compute-1 sudo[73348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:27 compute-1 sudo[73348]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:27 compute-1 sudo[73373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:48:27 compute-1 sudo[73373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:27 compute-1 sudo[73373]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:27 compute-1 sudo[73398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:27 compute-1 sudo[73398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:27 compute-1 sudo[73398]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:27 compute-1 sudo[73423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 06:48:27 compute-1 sudo[73423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:27 compute-1 sudo[73423]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:27 compute-1 sudo[73484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:27 compute-1 sudo[73484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:27 compute-1 sudo[73484]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:27 compute-1 sudo[73509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:48:27 compute-1 sudo[73509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:27 compute-1 sudo[73509]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:27 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:27 compute-1 sudo[73534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:27 compute-1 sudo[73534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:27 compute-1 sudo[73534]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:27 compute-1 sudo[73559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:48:27 compute-1 sudo[73559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:27 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:27 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73596 (sysctl)
Jan 31 06:48:27 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 06:48:27 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 06:48:27 compute-1 sudo[73559]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:28 compute-1 sudo[73618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:28 compute-1 sudo[73618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:28 compute-1 sudo[73618]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:28 compute-1 sudo[73643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:48:28 compute-1 sudo[73643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:28 compute-1 sudo[73643]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:28 compute-1 sudo[73668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:28 compute-1 sudo[73668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:28 compute-1 sudo[73668]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:28 compute-1 sudo[73693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 06:48:28 compute-1 sudo[73693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:28 compute-1 sudo[73693]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:28 compute-1 sudo[73736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:28 compute-1 sudo[73736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:28 compute-1 sudo[73736]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:28 compute-1 sudo[73761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:48:28 compute-1 sudo[73761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:28 compute-1 sudo[73761]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:28 compute-1 sudo[73786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:28 compute-1 sudo[73786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:28 compute-1 sudo[73786]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:28 compute-1 sudo[73811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -- inventory --format=json-pretty --filter-for-batch
Jan 31 06:48:28 compute-1 sudo[73811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat2911120850-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 06:48:36 compute-1 sshd-session[73916]: Invalid user solv from 2.57.122.238 port 49936
Jan 31 06:48:36 compute-1 sshd-session[73916]: Connection closed by invalid user solv 2.57.122.238 port 49936 [preauth]
Jan 31 06:48:48 compute-1 podman[73873]: 2026-01-31 06:48:48.428916163 +0000 UTC m=+19.687288667 container create b1720c25a191cc9a3b14b8546bf54b6399c930c5b3fe6ed257baf461fda8b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 06:48:48 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 06:48:48 compute-1 systemd[1]: Started libpod-conmon-b1720c25a191cc9a3b14b8546bf54b6399c930c5b3fe6ed257baf461fda8b48e.scope.
Jan 31 06:48:48 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:48:48 compute-1 podman[73873]: 2026-01-31 06:48:48.509284195 +0000 UTC m=+19.767656699 container init b1720c25a191cc9a3b14b8546bf54b6399c930c5b3fe6ed257baf461fda8b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:48:48 compute-1 podman[73873]: 2026-01-31 06:48:48.515171456 +0000 UTC m=+19.773543960 container start b1720c25a191cc9a3b14b8546bf54b6399c930c5b3fe6ed257baf461fda8b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:48:48 compute-1 podman[73873]: 2026-01-31 06:48:48.417132582 +0000 UTC m=+19.675505086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:48:48 compute-1 podman[73873]: 2026-01-31 06:48:48.52003854 +0000 UTC m=+19.778411044 container attach b1720c25a191cc9a3b14b8546bf54b6399c930c5b3fe6ed257baf461fda8b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:48:48 compute-1 nostalgic_agnesi[73936]: 167 167
Jan 31 06:48:48 compute-1 systemd[1]: libpod-b1720c25a191cc9a3b14b8546bf54b6399c930c5b3fe6ed257baf461fda8b48e.scope: Deactivated successfully.
Jan 31 06:48:48 compute-1 podman[73873]: 2026-01-31 06:48:48.521908698 +0000 UTC m=+19.780281232 container died b1720c25a191cc9a3b14b8546bf54b6399c930c5b3fe6ed257baf461fda8b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 06:48:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-f5ce4cdef70b103d2e50fa19f8f9a836652f72f03e1de86e1c067a02ab8d12d0-merged.mount: Deactivated successfully.
Jan 31 06:48:48 compute-1 podman[73873]: 2026-01-31 06:48:48.555829154 +0000 UTC m=+19.814201678 container remove b1720c25a191cc9a3b14b8546bf54b6399c930c5b3fe6ed257baf461fda8b48e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:48:48 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:48 compute-1 systemd[1]: libpod-conmon-b1720c25a191cc9a3b14b8546bf54b6399c930c5b3fe6ed257baf461fda8b48e.scope: Deactivated successfully.
Jan 31 06:48:48 compute-1 podman[73961]: 2026-01-31 06:48:48.667030383 +0000 UTC m=+0.037335114 container create be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:48:48 compute-1 systemd[1]: Started libpod-conmon-be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e.scope.
Jan 31 06:48:48 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:48:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b96451fc26d6d1c286c5166ad73046f850dca7f9461cf22cb0265161632bad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b96451fc26d6d1c286c5166ad73046f850dca7f9461cf22cb0265161632bad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:48 compute-1 podman[73961]: 2026-01-31 06:48:48.730037372 +0000 UTC m=+0.100342133 container init be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 06:48:48 compute-1 podman[73961]: 2026-01-31 06:48:48.7385965 +0000 UTC m=+0.108901231 container start be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:48:48 compute-1 podman[73961]: 2026-01-31 06:48:48.742247734 +0000 UTC m=+0.112552485 container attach be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:48:48 compute-1 podman[73961]: 2026-01-31 06:48:48.650385368 +0000 UTC m=+0.020690129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:48:49 compute-1 recursing_euclid[73978]: [
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:     {
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         "available": false,
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         "ceph_device": false,
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         "lsm_data": {},
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         "lvs": [],
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         "path": "/dev/sr0",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         "rejected_reasons": [
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "Has a FileSystem",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "Insufficient space (<5GB)"
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         ],
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         "sys_api": {
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "actuators": null,
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "device_nodes": "sr0",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "devname": "sr0",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "human_readable_size": "482.00 KB",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "id_bus": "ata",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "model": "QEMU DVD-ROM",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "nr_requests": "2",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "parent": "/dev/sr0",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "partitions": {},
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "path": "/dev/sr0",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "removable": "1",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "rev": "2.5+",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "ro": "0",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "rotational": "1",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "sas_address": "",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "sas_device_handle": "",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "scheduler_mode": "mq-deadline",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "sectors": 0,
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "sectorsize": "2048",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "size": 493568.0,
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "support_discard": "2048",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "type": "disk",
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:             "vendor": "QEMU"
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:         }
Jan 31 06:48:49 compute-1 recursing_euclid[73978]:     }
Jan 31 06:48:49 compute-1 recursing_euclid[73978]: ]
Jan 31 06:48:49 compute-1 systemd[1]: libpod-be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e.scope: Deactivated successfully.
Jan 31 06:48:49 compute-1 systemd[1]: libpod-be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e.scope: Consumed 1.101s CPU time.
Jan 31 06:48:49 compute-1 podman[73961]: 2026-01-31 06:48:49.845443282 +0000 UTC m=+1.215748013 container died be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:48:49 compute-1 systemd[1]: var-lib-containers-storage-overlay-a6b96451fc26d6d1c286c5166ad73046f850dca7f9461cf22cb0265161632bad-merged.mount: Deactivated successfully.
Jan 31 06:48:49 compute-1 podman[73961]: 2026-01-31 06:48:49.904782967 +0000 UTC m=+1.275087698 container remove be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_euclid, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:48:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:49 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:49 compute-1 systemd[1]: libpod-conmon-be5225fea8f2cb4c7d57b47028741a75708a67acb9fde114cabd04d416e90b1e.scope: Deactivated successfully.
Jan 31 06:48:49 compute-1 sudo[73811]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75058]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 06:48:50 compute-1 sudo[75083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75083]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75108]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph
Jan 31 06:48:50 compute-1 sudo[75133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75133]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75158]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new
Jan 31 06:48:50 compute-1 sudo[75183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75183]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75208]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:48:50 compute-1 sudo[75233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75233]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75258]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new
Jan 31 06:48:50 compute-1 sudo[75283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75283]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75331]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new
Jan 31 06:48:50 compute-1 sudo[75356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75356]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75381]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new
Jan 31 06:48:50 compute-1 sudo[75406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75406]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75431]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 31 06:48:50 compute-1 sudo[75456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75456]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75481]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config
Jan 31 06:48:50 compute-1 sudo[75506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75506]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:50 compute-1 sudo[75531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:50 compute-1 sudo[75531]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:50 compute-1 sudo[75556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config
Jan 31 06:48:50 compute-1 sudo[75556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75556]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[75581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75581]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new
Jan 31 06:48:51 compute-1 sudo[75606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75606]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[75631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75631]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:48:51 compute-1 sudo[75656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75656]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[75681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75681]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new
Jan 31 06:48:51 compute-1 sudo[75706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75706]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[75754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75754]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new
Jan 31 06:48:51 compute-1 sudo[75779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75779]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[75804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75804]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new
Jan 31 06:48:51 compute-1 sudo[75829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75829]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[75854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75854]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 06:48:51 compute-1 sudo[75879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75879]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[75904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75904]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 06:48:51 compute-1 sudo[75929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75929]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[75954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75954]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[75979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph
Jan 31 06:48:51 compute-1 sudo[75979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[75979]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[76004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[76004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[76004]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[76029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.client.admin.keyring.new
Jan 31 06:48:51 compute-1 sudo[76029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[76029]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[76054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:51 compute-1 sudo[76054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:51 compute-1 sudo[76054]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:51 compute-1 sudo[76079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:48:51 compute-1 sudo[76079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76079]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:52 compute-1 sudo[76104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76104]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.client.admin.keyring.new
Jan 31 06:48:52 compute-1 sudo[76129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76129]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:52 compute-1 sudo[76177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76177]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.client.admin.keyring.new
Jan 31 06:48:52 compute-1 sudo[76202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76202]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:52 compute-1 sudo[76227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76227]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.client.admin.keyring.new
Jan 31 06:48:52 compute-1 sudo[76252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76252]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:52 compute-1 sudo[76277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76277]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 31 06:48:52 compute-1 sudo[76302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76302]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:52 compute-1 sudo[76327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76327]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config
Jan 31 06:48:52 compute-1 sudo[76352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76352]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:52 compute-1 sudo[76377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76377]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config
Jan 31 06:48:52 compute-1 sudo[76402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76402]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:52 compute-1 sudo[76427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76427]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring.new
Jan 31 06:48:52 compute-1 sudo[76452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76452]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:52 compute-1 sudo[76477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76477]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:48:52 compute-1 sudo[76502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76502]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:52 compute-1 sudo[76527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76527]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:52 compute-1 sudo[76552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring.new
Jan 31 06:48:52 compute-1 sudo[76552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:52 compute-1 sudo[76552]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:53 compute-1 sudo[76600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 sudo[76600]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring.new
Jan 31 06:48:53 compute-1 sudo[76625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 sudo[76625]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:53 compute-1 sudo[76650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 sudo[76650]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring.new
Jan 31 06:48:53 compute-1 sudo[76675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 sudo[76675]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:53 compute-1 sudo[76700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 sudo[76700]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring.new /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 06:48:53 compute-1 sudo[76725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 sudo[76725]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:53 compute-1 sudo[76750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 sudo[76750]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:48:53 compute-1 sudo[76775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 sudo[76775]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:53 compute-1 sudo[76800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 sudo[76800]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:53 compute-1 sudo[76825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:48:53 compute-1 sudo[76825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:53 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:53 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:53 compute-1 podman[76890]: 2026-01-31 06:48:53.824748494 +0000 UTC m=+0.036603925 container create 445d2a0a7a5cad86af8fcfd9a17c0fcd820df0c3ca2d21118f985c92edadcf22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 06:48:53 compute-1 systemd[1]: Started libpod-conmon-445d2a0a7a5cad86af8fcfd9a17c0fcd820df0c3ca2d21118f985c92edadcf22.scope.
Jan 31 06:48:53 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:48:53 compute-1 podman[76890]: 2026-01-31 06:48:53.808595312 +0000 UTC m=+0.020450563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:48:53 compute-1 podman[76890]: 2026-01-31 06:48:53.914378283 +0000 UTC m=+0.126233524 container init 445d2a0a7a5cad86af8fcfd9a17c0fcd820df0c3ca2d21118f985c92edadcf22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_stonebraker, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:48:53 compute-1 podman[76890]: 2026-01-31 06:48:53.919211076 +0000 UTC m=+0.131066297 container start 445d2a0a7a5cad86af8fcfd9a17c0fcd820df0c3ca2d21118f985c92edadcf22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_stonebraker, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:48:53 compute-1 podman[76890]: 2026-01-31 06:48:53.922948252 +0000 UTC m=+0.134803613 container attach 445d2a0a7a5cad86af8fcfd9a17c0fcd820df0c3ca2d21118f985c92edadcf22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_stonebraker, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:48:53 compute-1 objective_stonebraker[76906]: 167 167
Jan 31 06:48:53 compute-1 systemd[1]: libpod-445d2a0a7a5cad86af8fcfd9a17c0fcd820df0c3ca2d21118f985c92edadcf22.scope: Deactivated successfully.
Jan 31 06:48:53 compute-1 podman[76890]: 2026-01-31 06:48:53.924738637 +0000 UTC m=+0.136593858 container died 445d2a0a7a5cad86af8fcfd9a17c0fcd820df0c3ca2d21118f985c92edadcf22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_stonebraker, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 06:48:53 compute-1 podman[76890]: 2026-01-31 06:48:53.966584826 +0000 UTC m=+0.178440047 container remove 445d2a0a7a5cad86af8fcfd9a17c0fcd820df0c3ca2d21118f985c92edadcf22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:48:53 compute-1 systemd[1]: libpod-conmon-445d2a0a7a5cad86af8fcfd9a17c0fcd820df0c3ca2d21118f985c92edadcf22.scope: Deactivated successfully.
Jan 31 06:48:54 compute-1 systemd[1]: Reloading.
Jan 31 06:48:54 compute-1 systemd-rc-local-generator[76954]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:48:54 compute-1 systemd-sysv-generator[76957]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:48:54 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:54 compute-1 systemd[1]: Reloading.
Jan 31 06:48:54 compute-1 systemd-rc-local-generator[76986]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:48:54 compute-1 systemd-sysv-generator[76992]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:48:54 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 06:48:54 compute-1 systemd[1]: Reloading.
Jan 31 06:48:54 compute-1 systemd-sysv-generator[77031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:48:54 compute-1 systemd-rc-local-generator[77027]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:48:54 compute-1 systemd[1]: Reached target Ceph cluster ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 06:48:54 compute-1 systemd[1]: Reloading.
Jan 31 06:48:54 compute-1 systemd-rc-local-generator[77066]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:48:54 compute-1 systemd-sysv-generator[77070]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:48:54 compute-1 systemd[1]: Reloading.
Jan 31 06:48:54 compute-1 systemd-rc-local-generator[77108]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:48:54 compute-1 systemd-sysv-generator[77111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:48:55 compute-1 systemd[1]: Created slice Slice /system/ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 06:48:55 compute-1 systemd[1]: Reached target System Time Set.
Jan 31 06:48:55 compute-1 systemd[1]: Reached target System Time Synchronized.
Jan 31 06:48:55 compute-1 systemd[1]: Starting Ceph crash.compute-1 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 06:48:55 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:55 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 06:48:55 compute-1 podman[77164]: 2026-01-31 06:48:55.289467502 +0000 UTC m=+0.038986965 container create 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:48:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9519aa234583c5b1c2cea9e276f3a2c86f005d717664b61ac506877d61b3b4b/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9519aa234583c5b1c2cea9e276f3a2c86f005d717664b61ac506877d61b3b4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9519aa234583c5b1c2cea9e276f3a2c86f005d717664b61ac506877d61b3b4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:55 compute-1 podman[77164]: 2026-01-31 06:48:55.271758581 +0000 UTC m=+0.021278084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:48:55 compute-1 podman[77164]: 2026-01-31 06:48:55.3755744 +0000 UTC m=+0.125093883 container init 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:48:55 compute-1 podman[77164]: 2026-01-31 06:48:55.379167262 +0000 UTC m=+0.128686715 container start 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 06:48:55 compute-1 bash[77164]: 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd
Jan 31 06:48:55 compute-1 systemd[1]: Started Ceph crash.compute-1 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 06:48:55 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1[77180]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 06:48:55 compute-1 sudo[76825]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:55 compute-1 sudo[77187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:55 compute-1 sudo[77187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:55 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1[77180]: 2026-01-31T06:48:55.746+0000 7fcb72bad640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 06:48:55 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1[77180]: 2026-01-31T06:48:55.746+0000 7fcb72bad640 -1 AuthRegistry(0x7fcb6c067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 06:48:55 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1[77180]: 2026-01-31T06:48:55.747+0000 7fcb72bad640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 06:48:55 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1[77180]: 2026-01-31T06:48:55.747+0000 7fcb72bad640 -1 AuthRegistry(0x7fcb72bac000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 06:48:55 compute-1 sudo[77187]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:55 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1[77180]: 2026-01-31T06:48:55.750+0000 7fcb70922640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 06:48:55 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1[77180]: 2026-01-31T06:48:55.750+0000 7fcb72bad640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 06:48:55 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1[77180]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 06:48:55 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1[77180]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 06:48:55 compute-1 sudo[77222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:48:55 compute-1 sudo[77222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:55 compute-1 sudo[77222]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:55 compute-1 sudo[77247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:48:55 compute-1 sudo[77247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:55 compute-1 sudo[77247]: pam_unix(sudo:session): session closed for user root
Jan 31 06:48:55 compute-1 sudo[77272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 06:48:55 compute-1 sudo[77272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:48:56 compute-1 podman[77338]: 2026-01-31 06:48:56.164312479 +0000 UTC m=+0.042737252 container create 730e6effd080fec147bef9af650dacd14a774833148e6a503295e2735572023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:48:56 compute-1 systemd[1]: Started libpod-conmon-730e6effd080fec147bef9af650dacd14a774833148e6a503295e2735572023d.scope.
Jan 31 06:48:56 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:48:56 compute-1 podman[77338]: 2026-01-31 06:48:56.143806706 +0000 UTC m=+0.022231499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:48:56 compute-1 podman[77338]: 2026-01-31 06:48:56.244381264 +0000 UTC m=+0.122806057 container init 730e6effd080fec147bef9af650dacd14a774833148e6a503295e2735572023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 06:48:56 compute-1 podman[77338]: 2026-01-31 06:48:56.251051584 +0000 UTC m=+0.129476357 container start 730e6effd080fec147bef9af650dacd14a774833148e6a503295e2735572023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 06:48:56 compute-1 podman[77338]: 2026-01-31 06:48:56.254277257 +0000 UTC m=+0.132702040 container attach 730e6effd080fec147bef9af650dacd14a774833148e6a503295e2735572023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 31 06:48:56 compute-1 practical_bell[77354]: 167 167
Jan 31 06:48:56 compute-1 podman[77338]: 2026-01-31 06:48:56.255270892 +0000 UTC m=+0.133695665 container died 730e6effd080fec147bef9af650dacd14a774833148e6a503295e2735572023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 06:48:56 compute-1 systemd[1]: libpod-730e6effd080fec147bef9af650dacd14a774833148e6a503295e2735572023d.scope: Deactivated successfully.
Jan 31 06:48:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-d7188121ede86f10281393725951ce7d58cc9693b8d7a5c2bc442c5f6471e87b-merged.mount: Deactivated successfully.
Jan 31 06:48:56 compute-1 podman[77338]: 2026-01-31 06:48:56.284455067 +0000 UTC m=+0.162879840 container remove 730e6effd080fec147bef9af650dacd14a774833148e6a503295e2735572023d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 06:48:56 compute-1 systemd[1]: libpod-conmon-730e6effd080fec147bef9af650dacd14a774833148e6a503295e2735572023d.scope: Deactivated successfully.
Jan 31 06:48:56 compute-1 podman[77377]: 2026-01-31 06:48:56.396550699 +0000 UTC m=+0.037133469 container create 287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:48:56 compute-1 systemd[1]: Started libpod-conmon-287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0.scope.
Jan 31 06:48:56 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:48:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a732f5f56904a7455680a764e863031652cb587fd8d8cdb0d9d953fda959c784/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a732f5f56904a7455680a764e863031652cb587fd8d8cdb0d9d953fda959c784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a732f5f56904a7455680a764e863031652cb587fd8d8cdb0d9d953fda959c784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a732f5f56904a7455680a764e863031652cb587fd8d8cdb0d9d953fda959c784/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a732f5f56904a7455680a764e863031652cb587fd8d8cdb0d9d953fda959c784/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:48:56 compute-1 podman[77377]: 2026-01-31 06:48:56.463084518 +0000 UTC m=+0.103667308 container init 287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 06:48:56 compute-1 podman[77377]: 2026-01-31 06:48:56.468625159 +0000 UTC m=+0.109207919 container start 287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:48:56 compute-1 podman[77377]: 2026-01-31 06:48:56.47373915 +0000 UTC m=+0.114321940 container attach 287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 06:48:56 compute-1 podman[77377]: 2026-01-31 06:48:56.379892694 +0000 UTC m=+0.020475494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:48:57 compute-1 distracted_turing[77394]: --> passed data devices: 0 physical, 1 LVM
Jan 31 06:48:57 compute-1 distracted_turing[77394]: --> relative data size: 1.0
Jan 31 06:48:57 compute-1 distracted_turing[77394]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 06:48:57 compute-1 distracted_turing[77394]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 46b4bff0-70b0-4ed6-b674-df49592cba42
Jan 31 06:48:57 compute-1 distracted_turing[77394]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 06:48:57 compute-1 lvm[77441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:48:57 compute-1 lvm[77441]: VG ceph_vg0 finished
Jan 31 06:48:57 compute-1 distracted_turing[77394]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 31 06:48:57 compute-1 distracted_turing[77394]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 06:48:57 compute-1 distracted_turing[77394]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 06:48:57 compute-1 distracted_turing[77394]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 31 06:48:57 compute-1 distracted_turing[77394]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 31 06:48:58 compute-1 distracted_turing[77394]:  stderr: got monmap epoch 1
Jan 31 06:48:58 compute-1 distracted_turing[77394]: --> Creating keyring file for osd.1
Jan 31 06:48:58 compute-1 distracted_turing[77394]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 31 06:48:58 compute-1 distracted_turing[77394]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 31 06:48:58 compute-1 distracted_turing[77394]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 46b4bff0-70b0-4ed6-b674-df49592cba42 --setuser ceph --setgroup ceph
Jan 31 06:49:01 compute-1 distracted_turing[77394]:  stderr: 2026-01-31T06:48:58.333+0000 7f15ebf66740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 06:49:01 compute-1 distracted_turing[77394]:  stderr: 2026-01-31T06:48:58.333+0000 7f15ebf66740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 06:49:01 compute-1 distracted_turing[77394]:  stderr: 2026-01-31T06:48:58.333+0000 7f15ebf66740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 06:49:01 compute-1 distracted_turing[77394]:  stderr: 2026-01-31T06:48:58.333+0000 7f15ebf66740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 31 06:49:01 compute-1 distracted_turing[77394]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 06:49:01 compute-1 distracted_turing[77394]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 06:49:01 compute-1 distracted_turing[77394]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 06:49:01 compute-1 distracted_turing[77394]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:01 compute-1 distracted_turing[77394]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:01 compute-1 distracted_turing[77394]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 06:49:01 compute-1 distracted_turing[77394]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 06:49:01 compute-1 distracted_turing[77394]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 06:49:01 compute-1 distracted_turing[77394]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 31 06:49:01 compute-1 systemd[1]: libpod-287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0.scope: Deactivated successfully.
Jan 31 06:49:01 compute-1 systemd[1]: libpod-287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0.scope: Consumed 2.177s CPU time.
Jan 31 06:49:01 compute-1 podman[78354]: 2026-01-31 06:49:01.609811659 +0000 UTC m=+0.026221650 container died 287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:49:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-a732f5f56904a7455680a764e863031652cb587fd8d8cdb0d9d953fda959c784-merged.mount: Deactivated successfully.
Jan 31 06:49:02 compute-1 podman[78354]: 2026-01-31 06:49:02.457491492 +0000 UTC m=+0.873901473 container remove 287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:49:02 compute-1 systemd[1]: libpod-conmon-287c940ddcd3c36cf4bcfe9e0adf6ab56dfcc0b6a239c3fbc1a3d101e251dbc0.scope: Deactivated successfully.
Jan 31 06:49:02 compute-1 sudo[77272]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:02 compute-1 sudo[78370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:02 compute-1 sudo[78370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:02 compute-1 sudo[78370]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:02 compute-1 sudo[78395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:49:02 compute-1 sudo[78395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:02 compute-1 sudo[78395]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:02 compute-1 sudo[78420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:02 compute-1 sudo[78420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:02 compute-1 sudo[78420]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:02 compute-1 sudo[78445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -- lvm list --format json
Jan 31 06:49:02 compute-1 sudo[78445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:03 compute-1 podman[78510]: 2026-01-31 06:49:02.935968459 +0000 UTC m=+0.019463578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:03 compute-1 podman[78510]: 2026-01-31 06:49:03.102994095 +0000 UTC m=+0.186489194 container create 7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gagarin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:49:03 compute-1 systemd[1]: Started libpod-conmon-7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9.scope.
Jan 31 06:49:03 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:03 compute-1 podman[78510]: 2026-01-31 06:49:03.346667626 +0000 UTC m=+0.430162745 container init 7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gagarin, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 06:49:03 compute-1 podman[78510]: 2026-01-31 06:49:03.356747723 +0000 UTC m=+0.440242822 container start 7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 06:49:03 compute-1 recursing_gagarin[78526]: 167 167
Jan 31 06:49:03 compute-1 systemd[1]: libpod-7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9.scope: Deactivated successfully.
Jan 31 06:49:03 compute-1 conmon[78526]: conmon 7875d5b06808e77487fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9.scope/container/memory.events
Jan 31 06:49:03 compute-1 podman[78510]: 2026-01-31 06:49:03.375095321 +0000 UTC m=+0.458590440 container attach 7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:49:03 compute-1 podman[78510]: 2026-01-31 06:49:03.375712537 +0000 UTC m=+0.459207636 container died 7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:49:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-95545cce8bf111403fa3cf8bfbb279794fbeb11d31906c2047bfc47518a76067-merged.mount: Deactivated successfully.
Jan 31 06:49:03 compute-1 podman[78510]: 2026-01-31 06:49:03.464200377 +0000 UTC m=+0.547695476 container remove 7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_gagarin, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 06:49:03 compute-1 systemd[1]: libpod-conmon-7875d5b06808e77487fa9caf46a6453c948e04d9a408f20a96bbff3b103c0ec9.scope: Deactivated successfully.
Jan 31 06:49:03 compute-1 podman[78549]: 2026-01-31 06:49:03.568924791 +0000 UTC m=+0.039965352 container create 61b2d2c64f1eef628394f34cc26ee7a274aeaef0303a6e3b1eda7d3160479c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:49:03 compute-1 systemd[1]: Started libpod-conmon-61b2d2c64f1eef628394f34cc26ee7a274aeaef0303a6e3b1eda7d3160479c5a.scope.
Jan 31 06:49:03 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6d5d9df2ee28a37a61b4fa615f54d1a0ae01b77de1a372e9f1bc43a951f019/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6d5d9df2ee28a37a61b4fa615f54d1a0ae01b77de1a372e9f1bc43a951f019/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6d5d9df2ee28a37a61b4fa615f54d1a0ae01b77de1a372e9f1bc43a951f019/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6d5d9df2ee28a37a61b4fa615f54d1a0ae01b77de1a372e9f1bc43a951f019/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:03 compute-1 podman[78549]: 2026-01-31 06:49:03.639486382 +0000 UTC m=+0.110526963 container init 61b2d2c64f1eef628394f34cc26ee7a274aeaef0303a6e3b1eda7d3160479c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 06:49:03 compute-1 podman[78549]: 2026-01-31 06:49:03.645574278 +0000 UTC m=+0.116614839 container start 61b2d2c64f1eef628394f34cc26ee7a274aeaef0303a6e3b1eda7d3160479c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:49:03 compute-1 podman[78549]: 2026-01-31 06:49:03.550924421 +0000 UTC m=+0.021965002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:03 compute-1 podman[78549]: 2026-01-31 06:49:03.651899449 +0000 UTC m=+0.122940010 container attach 61b2d2c64f1eef628394f34cc26ee7a274aeaef0303a6e3b1eda7d3160479c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]: {
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:     "1": [
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:         {
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "devices": [
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "/dev/loop3"
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             ],
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "lv_name": "ceph_lv0",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "lv_size": "7511998464",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3Wpppw-5WD0-XGm2-nWHG-26cR-Inf3-cdW4BI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=46b4bff0-70b0-4ed6-b674-df49592cba42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "lv_uuid": "3Wpppw-5WD0-XGm2-nWHG-26cR-Inf3-cdW4BI",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "name": "ceph_lv0",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "tags": {
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.block_uuid": "3Wpppw-5WD0-XGm2-nWHG-26cR-Inf3-cdW4BI",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.cluster_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.cluster_name": "ceph",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.crush_device_class": "",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.encrypted": "0",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.osd_fsid": "46b4bff0-70b0-4ed6-b674-df49592cba42",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.osd_id": "1",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.type": "block",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:                 "ceph.vdo": "0"
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             },
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "type": "block",
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:             "vg_name": "ceph_vg0"
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:         }
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]:     ]
Jan 31 06:49:04 compute-1 vibrant_johnson[78566]: }
Jan 31 06:49:04 compute-1 systemd[1]: libpod-61b2d2c64f1eef628394f34cc26ee7a274aeaef0303a6e3b1eda7d3160479c5a.scope: Deactivated successfully.
Jan 31 06:49:04 compute-1 podman[78549]: 2026-01-31 06:49:04.403011397 +0000 UTC m=+0.874051988 container died 61b2d2c64f1eef628394f34cc26ee7a274aeaef0303a6e3b1eda7d3160479c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 06:49:04 compute-1 systemd[1]: var-lib-containers-storage-overlay-1d6d5d9df2ee28a37a61b4fa615f54d1a0ae01b77de1a372e9f1bc43a951f019-merged.mount: Deactivated successfully.
Jan 31 06:49:04 compute-1 podman[78549]: 2026-01-31 06:49:04.463472071 +0000 UTC m=+0.934512622 container remove 61b2d2c64f1eef628394f34cc26ee7a274aeaef0303a6e3b1eda7d3160479c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:49:04 compute-1 systemd[1]: libpod-conmon-61b2d2c64f1eef628394f34cc26ee7a274aeaef0303a6e3b1eda7d3160479c5a.scope: Deactivated successfully.
Jan 31 06:49:04 compute-1 sudo[78445]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:04 compute-1 sudo[78585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:04 compute-1 sudo[78585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:04 compute-1 sudo[78585]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:04 compute-1 sudo[78610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:49:04 compute-1 sudo[78610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:04 compute-1 sudo[78610]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:04 compute-1 sudo[78635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:04 compute-1 sudo[78635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:04 compute-1 sudo[78635]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:04 compute-1 sudo[78660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:49:04 compute-1 sudo[78660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:04 compute-1 podman[78725]: 2026-01-31 06:49:04.952032676 +0000 UTC m=+0.033645911 container create 886d88c424402a670d2afca29e2687d5f53915c968ed7acbd7d013979a344c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 06:49:04 compute-1 systemd[1]: Started libpod-conmon-886d88c424402a670d2afca29e2687d5f53915c968ed7acbd7d013979a344c36.scope.
Jan 31 06:49:04 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:05 compute-1 podman[78725]: 2026-01-31 06:49:05.006722902 +0000 UTC m=+0.088336157 container init 886d88c424402a670d2afca29e2687d5f53915c968ed7acbd7d013979a344c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:49:05 compute-1 podman[78725]: 2026-01-31 06:49:05.011661778 +0000 UTC m=+0.093275013 container start 886d88c424402a670d2afca29e2687d5f53915c968ed7acbd7d013979a344c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:49:05 compute-1 podman[78725]: 2026-01-31 06:49:05.014808948 +0000 UTC m=+0.096422203 container attach 886d88c424402a670d2afca29e2687d5f53915c968ed7acbd7d013979a344c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:49:05 compute-1 busy_hofstadter[78742]: 167 167
Jan 31 06:49:05 compute-1 systemd[1]: libpod-886d88c424402a670d2afca29e2687d5f53915c968ed7acbd7d013979a344c36.scope: Deactivated successfully.
Jan 31 06:49:05 compute-1 podman[78725]: 2026-01-31 06:49:05.016209124 +0000 UTC m=+0.097822359 container died 886d88c424402a670d2afca29e2687d5f53915c968ed7acbd7d013979a344c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:49:05 compute-1 podman[78725]: 2026-01-31 06:49:04.936683114 +0000 UTC m=+0.018296369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-0cc254041d60e85d72c323b04b966e86cf6d5d36be5c7eae65c63bdcef452fd8-merged.mount: Deactivated successfully.
Jan 31 06:49:05 compute-1 podman[78725]: 2026-01-31 06:49:05.051728681 +0000 UTC m=+0.133341926 container remove 886d88c424402a670d2afca29e2687d5f53915c968ed7acbd7d013979a344c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hofstadter, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:49:05 compute-1 systemd[1]: libpod-conmon-886d88c424402a670d2afca29e2687d5f53915c968ed7acbd7d013979a344c36.scope: Deactivated successfully.
Jan 31 06:49:05 compute-1 podman[78773]: 2026-01-31 06:49:05.218593612 +0000 UTC m=+0.033325512 container create 309b15592ebe46cd8195851dac01724a9b2a700a47c6aa4cdf474fecf5b47d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate-test, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 06:49:05 compute-1 systemd[1]: Started libpod-conmon-309b15592ebe46cd8195851dac01724a9b2a700a47c6aa4cdf474fecf5b47d9d.scope.
Jan 31 06:49:05 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0848018d3bf0cba36f229a354efb09fdc86c04bc4481529f0b5f2ee952be9079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0848018d3bf0cba36f229a354efb09fdc86c04bc4481529f0b5f2ee952be9079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0848018d3bf0cba36f229a354efb09fdc86c04bc4481529f0b5f2ee952be9079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0848018d3bf0cba36f229a354efb09fdc86c04bc4481529f0b5f2ee952be9079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0848018d3bf0cba36f229a354efb09fdc86c04bc4481529f0b5f2ee952be9079/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:05 compute-1 podman[78773]: 2026-01-31 06:49:05.278230184 +0000 UTC m=+0.092962104 container init 309b15592ebe46cd8195851dac01724a9b2a700a47c6aa4cdf474fecf5b47d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 06:49:05 compute-1 podman[78773]: 2026-01-31 06:49:05.2843211 +0000 UTC m=+0.099053000 container start 309b15592ebe46cd8195851dac01724a9b2a700a47c6aa4cdf474fecf5b47d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 06:49:05 compute-1 podman[78773]: 2026-01-31 06:49:05.203744612 +0000 UTC m=+0.018476542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:05 compute-1 podman[78773]: 2026-01-31 06:49:05.303573671 +0000 UTC m=+0.118305571 container attach 309b15592ebe46cd8195851dac01724a9b2a700a47c6aa4cdf474fecf5b47d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate-test, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 06:49:05 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate-test[78789]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 31 06:49:05 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate-test[78789]:                             [--no-systemd] [--no-tmpfs]
Jan 31 06:49:05 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate-test[78789]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 06:49:05 compute-1 systemd[1]: libpod-309b15592ebe46cd8195851dac01724a9b2a700a47c6aa4cdf474fecf5b47d9d.scope: Deactivated successfully.
Jan 31 06:49:05 compute-1 podman[78773]: 2026-01-31 06:49:05.980220668 +0000 UTC m=+0.794952558 container died 309b15592ebe46cd8195851dac01724a9b2a700a47c6aa4cdf474fecf5b47d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate-test, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 06:49:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-0848018d3bf0cba36f229a354efb09fdc86c04bc4481529f0b5f2ee952be9079-merged.mount: Deactivated successfully.
Jan 31 06:49:06 compute-1 podman[78773]: 2026-01-31 06:49:06.032929883 +0000 UTC m=+0.847661783 container remove 309b15592ebe46cd8195851dac01724a9b2a700a47c6aa4cdf474fecf5b47d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 06:49:06 compute-1 systemd[1]: libpod-conmon-309b15592ebe46cd8195851dac01724a9b2a700a47c6aa4cdf474fecf5b47d9d.scope: Deactivated successfully.
Jan 31 06:49:06 compute-1 systemd[1]: Reloading.
Jan 31 06:49:06 compute-1 systemd-rc-local-generator[78851]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:49:06 compute-1 systemd-sysv-generator[78856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:49:06 compute-1 systemd[1]: Reloading.
Jan 31 06:49:06 compute-1 systemd-rc-local-generator[78889]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:49:06 compute-1 systemd-sysv-generator[78895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:49:06 compute-1 systemd[1]: Starting Ceph osd.1 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 06:49:06 compute-1 podman[78952]: 2026-01-31 06:49:06.841536649 +0000 UTC m=+0.079253004 container create bfc74af43361b59e336c3605497566f94c9bd2fb302aaec90e5bdf228a3dbbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 06:49:06 compute-1 podman[78952]: 2026-01-31 06:49:06.781156957 +0000 UTC m=+0.018873342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:06 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/217bd41e820044de8ed145dfc1c95c93ec18224031689a81315540a96ab65df5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/217bd41e820044de8ed145dfc1c95c93ec18224031689a81315540a96ab65df5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/217bd41e820044de8ed145dfc1c95c93ec18224031689a81315540a96ab65df5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/217bd41e820044de8ed145dfc1c95c93ec18224031689a81315540a96ab65df5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/217bd41e820044de8ed145dfc1c95c93ec18224031689a81315540a96ab65df5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:06 compute-1 podman[78952]: 2026-01-31 06:49:06.929345621 +0000 UTC m=+0.167062006 container init bfc74af43361b59e336c3605497566f94c9bd2fb302aaec90e5bdf228a3dbbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:49:06 compute-1 podman[78952]: 2026-01-31 06:49:06.933671702 +0000 UTC m=+0.171388057 container start bfc74af43361b59e336c3605497566f94c9bd2fb302aaec90e5bdf228a3dbbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 06:49:06 compute-1 podman[78952]: 2026-01-31 06:49:06.937617852 +0000 UTC m=+0.175334197 container attach bfc74af43361b59e336c3605497566f94c9bd2fb302aaec90e5bdf228a3dbbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 06:49:07 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate[78967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 06:49:07 compute-1 bash[78952]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 06:49:07 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate[78967]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 06:49:07 compute-1 bash[78952]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 06:49:07 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate[78967]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 06:49:07 compute-1 bash[78952]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 06:49:07 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate[78967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 06:49:07 compute-1 bash[78952]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 06:49:07 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate[78967]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:07 compute-1 bash[78952]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:07 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate[78967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 06:49:07 compute-1 bash[78952]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 06:49:07 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate[78967]: --> ceph-volume raw activate successful for osd ID: 1
Jan 31 06:49:07 compute-1 bash[78952]: --> ceph-volume raw activate successful for osd ID: 1
Jan 31 06:49:07 compute-1 podman[78952]: 2026-01-31 06:49:07.855392286 +0000 UTC m=+1.093108641 container died bfc74af43361b59e336c3605497566f94c9bd2fb302aaec90e5bdf228a3dbbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 06:49:07 compute-1 systemd[1]: libpod-bfc74af43361b59e336c3605497566f94c9bd2fb302aaec90e5bdf228a3dbbcf.scope: Deactivated successfully.
Jan 31 06:49:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-217bd41e820044de8ed145dfc1c95c93ec18224031689a81315540a96ab65df5-merged.mount: Deactivated successfully.
Jan 31 06:49:07 compute-1 podman[78952]: 2026-01-31 06:49:07.911362825 +0000 UTC m=+1.149079180 container remove bfc74af43361b59e336c3605497566f94c9bd2fb302aaec90e5bdf228a3dbbcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 06:49:08 compute-1 podman[79126]: 2026-01-31 06:49:08.061718364 +0000 UTC m=+0.033323042 container create d693486e2936bdc4233e91f759d06c9322123638d224bc8e25f41f207ab9f93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 06:49:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3dea19dc46d40637869a5f144036893ebbce19d4a50ea355f8ee38aa0d410e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3dea19dc46d40637869a5f144036893ebbce19d4a50ea355f8ee38aa0d410e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3dea19dc46d40637869a5f144036893ebbce19d4a50ea355f8ee38aa0d410e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3dea19dc46d40637869a5f144036893ebbce19d4a50ea355f8ee38aa0d410e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3dea19dc46d40637869a5f144036893ebbce19d4a50ea355f8ee38aa0d410e/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:08 compute-1 podman[79126]: 2026-01-31 06:49:08.130736306 +0000 UTC m=+0.102341004 container init d693486e2936bdc4233e91f759d06c9322123638d224bc8e25f41f207ab9f93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 06:49:08 compute-1 podman[79126]: 2026-01-31 06:49:08.135331284 +0000 UTC m=+0.106935962 container start d693486e2936bdc4233e91f759d06c9322123638d224bc8e25f41f207ab9f93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:49:08 compute-1 podman[79126]: 2026-01-31 06:49:08.046544567 +0000 UTC m=+0.018149275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:08 compute-1 bash[79126]: d693486e2936bdc4233e91f759d06c9322123638d224bc8e25f41f207ab9f93f
Jan 31 06:49:08 compute-1 ceph-osd[79145]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 06:49:08 compute-1 systemd[1]: Started Ceph osd.1 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 06:49:08 compute-1 ceph-osd[79145]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 31 06:49:08 compute-1 ceph-osd[79145]: pidfile_write: ignore empty --pid-file
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83dcf7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83dcf7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83dcf7800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83dcf7800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83eb2f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83eb2f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83eb2f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83eb2f800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83eb2f800 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 06:49:08 compute-1 sudo[78660]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:08 compute-1 sudo[79158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:08 compute-1 sudo[79158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:08 compute-1 sudo[79158]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:08 compute-1 sudo[79183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:49:08 compute-1 sudo[79183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:08 compute-1 sudo[79183]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:08 compute-1 sudo[79208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:08 compute-1 sudo[79208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:08 compute-1 sudo[79208]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:08 compute-1 sudo[79233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -- raw list --format json
Jan 31 06:49:08 compute-1 sudo[79233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83dcf7800 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 06:49:08 compute-1 ceph-osd[79145]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 31 06:49:08 compute-1 ceph-osd[79145]: load: jerasure load: lrc 
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 06:49:08 compute-1 podman[79300]: 2026-01-31 06:49:08.718275188 +0000 UTC m=+0.066633412 container create bd34465c5740bd24d99dd4e6b020417ecf01320450f54871727e47ebfb3516a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:49:08 compute-1 podman[79300]: 2026-01-31 06:49:08.670742004 +0000 UTC m=+0.019100248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:08 compute-1 systemd[1]: Started libpod-conmon-bd34465c5740bd24d99dd4e6b020417ecf01320450f54871727e47ebfb3516a0.scope.
Jan 31 06:49:08 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:08 compute-1 podman[79300]: 2026-01-31 06:49:08.801171174 +0000 UTC m=+0.149529428 container init bd34465c5740bd24d99dd4e6b020417ecf01320450f54871727e47ebfb3516a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 06:49:08 compute-1 podman[79300]: 2026-01-31 06:49:08.807420044 +0000 UTC m=+0.155778268 container start bd34465c5740bd24d99dd4e6b020417ecf01320450f54871727e47ebfb3516a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 06:49:08 compute-1 jovial_perlman[79321]: 167 167
Jan 31 06:49:08 compute-1 systemd[1]: libpod-bd34465c5740bd24d99dd4e6b020417ecf01320450f54871727e47ebfb3516a0.scope: Deactivated successfully.
Jan 31 06:49:08 compute-1 podman[79300]: 2026-01-31 06:49:08.825918716 +0000 UTC m=+0.174276970 container attach bd34465c5740bd24d99dd4e6b020417ecf01320450f54871727e47ebfb3516a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:49:08 compute-1 podman[79300]: 2026-01-31 06:49:08.827987149 +0000 UTC m=+0.176345393 container died bd34465c5740bd24d99dd4e6b020417ecf01320450f54871727e47ebfb3516a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 06:49:08 compute-1 systemd[1]: var-lib-containers-storage-overlay-b2a2d734e65bad360a087a3833bc6483166561f2b2f2efb94986f1753af0e287-merged.mount: Deactivated successfully.
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 06:49:08 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 06:49:08 compute-1 podman[79300]: 2026-01-31 06:49:08.969668937 +0000 UTC m=+0.318027161 container remove bd34465c5740bd24d99dd4e6b020417ecf01320450f54871727e47ebfb3516a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:49:08 compute-1 systemd[1]: libpod-conmon-bd34465c5740bd24d99dd4e6b020417ecf01320450f54871727e47ebfb3516a0.scope: Deactivated successfully.
Jan 31 06:49:09 compute-1 podman[79350]: 2026-01-31 06:49:09.104596272 +0000 UTC m=+0.052039720 container create 582df423b338dbf5bcfd01a199b9e17bc859974fe21d52af14a4075d56627dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leakey, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:49:09 compute-1 podman[79350]: 2026-01-31 06:49:09.073321063 +0000 UTC m=+0.020764531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 06:49:09 compute-1 ceph-osd[79145]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb0c00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb1400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb1400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb1400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb1400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluefs mount
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluefs mount shared_bdev_used = 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: RocksDB version: 7.9.2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Git sha 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: DB SUMMARY
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: DB Session ID:  PNSMXWVGIWE4PL0SKE5G
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: CURRENT file:  CURRENT
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                         Options.error_if_exists: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.create_if_missing: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                                     Options.env: 0x55c83eb81c70
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                                Options.info_log: 0x55c83dd74ba0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                              Options.statistics: (nil)
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.use_fsync: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                              Options.db_log_dir: 
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.write_buffer_manager: 0x55c83ec94460
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.unordered_write: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.row_cache: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                              Options.wal_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.two_write_queues: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.wal_compression: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.atomic_flush: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.max_background_jobs: 4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.max_background_compactions: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.max_subcompactions: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.max_open_files: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Compression algorithms supported:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kZSTD supported: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kXpressCompression supported: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kBZip2Compression supported: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kLZ4Compression supported: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kZlibCompression supported: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kSnappyCompression supported: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd74600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
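
All of the column families in this dump share a single cache (the same block_cache pointer recurs in every table_factory block below); BinnedLRUCache is Ceph's own sharded LRU cache implementation. With num_shard_bits: 4 the capacity is split across 2**4 independent shards. A quick check of the numbers:

# Figures taken from the block_cache_options above.
capacity_bytes = 483183820      # total cache capacity from the dump
num_shard_bits = 4              # cache is split into 2**4 shards

shards = 2 ** num_shard_bits
print(f"total    : {capacity_bytes / 2**20:.1f} MiB")            # 460.8 MiB
print(f"shards   : {shards}")                                    # 16
print(f"per shard: {capacity_bytes / shards / 2**20:.1f} MiB")   # 28.8 MiB
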
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
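
A few capacities implied by the [default] column-family options are worth making explicit: 64 write buffers of 16 MiB give this family up to 1 GiB of memtables before writes stall, a flush normally merges at least 6 buffers (~96 MiB), and because level_compaction_dynamic_level_bytes is 0 (with every addtl multiplier at 1) the level targets simply grow geometrically from max_bytes_for_level_base by the x8 multiplier. The CompactOnDeletionCollector line, likewise, marks an SST for compaction once at least 16384 of any sliding window of 32768 entries are deletes, i.e. a 50% tombstone ratio. A sketch of the sizing arithmetic:

# Derived capacities for the [default] column-family options above.
MiB, GiB = 2**20, 2**30

write_buffer_size = 16 * MiB   # Options.write_buffer_size
max_write_buffers = 64         # Options.max_write_buffer_number
min_merge         = 6          # Options.min_write_buffer_number_to_merge
level_base        = 1 * GiB    # Options.max_bytes_for_level_base
multiplier        = 8          # Options.max_bytes_for_level_multiplier
num_levels        = 7          # Options.num_levels

print(f"memtable ceiling: {write_buffer_size * max_write_buffers // GiB} GiB")  # 1
print(f"typical flush   : {write_buffer_size * min_merge // MiB} MiB")          # 96
for lvl in range(1, num_levels):
    target = level_base * multiplier ** (lvl - 1)
    print(f"L{lvl} target: {target / GiB:g} GiB")   # L1=1, L2=8, ..., L6=32768
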
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd74600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd74600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd74600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
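
The dumps for [m-0], [m-1], and [m-2] above are line-for-line identical to [default] except for the merge operator (.T:int64_array.b:bitwise_xor on [default], None on the sharded families). Rather than eyeballing roughly 130 lines per family, the blocks can be split on their "Options for column family" headers and diffed as dicts; a self-contained sketch (the input filename is hypothetical):

import re

CF_RE  = re.compile(r'Options for column family \[([^\]]+)\]')
OPT_RE = re.compile(r'Options\.([A-Za-z0-9_.\[\]]+)\s*:\s*(.+?)\s*$')

def options_by_family(lines):
    """Return {family_name: {option: value}} from a journal dump."""
    fams, current = {}, None
    for line in lines:
        header = CF_RE.search(line)
        if header:
            current = fams.setdefault(header.group(1), {})
            continue
        m = OPT_RE.search(line)
        if m and current is not None:
            current[m.group(1)] = m.group(2)
    return fams

with open('osd-startup.log') as fh:       # hypothetical journal extract
    fams = options_by_family(fh)
base = fams['default']
for name, opts in sorted(fams.items()):
    diff = sorted(k for k in base if opts.get(k) != base[k])
    print(name, diff or 'identical to [default]')
# expected: m-0/m-1/m-2/p-0 each report only ['merge_operator']
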
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd74600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd74600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd74600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6add0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd745c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6a430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
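The compaction geometry dumped above (target_file_size_base 64 MiB, max_bytes_for_level_base 1 GiB, multiplier 8, num_levels 7, dynamic level bytes off) fixes how large each LSM level may grow: level N is capped at max_bytes_for_level_base times multiplier to the power N-1. A minimal sketch of that arithmetic, using only values printed in this dump:

```python
# Per-level size caps implied by the options above (static sizing, since
# level_compaction_dynamic_level_bytes is 0). Values copied from the log.
max_bytes_for_level_base = 1073741824      # 1 GiB, the L1 cap
max_bytes_for_level_multiplier = 8.0
num_levels = 7                             # Options.num_levels

for level in range(1, num_levels):
    cap = max_bytes_for_level_base * max_bytes_for_level_multiplier ** (level - 1)
    print(f"L{level}: {cap / 2**30:g} GiB")
# L1: 1  L2: 8  L3: 64  L4: 512  L5: 4096  L6: 32768 (GiB)
```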
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
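The manifest recovery enumerates the twelve column families of a sharded BlueStore RocksDB (default, the m-*/p-*/O-* shards, plus L and P), all on log number 5. To pull that inventory out of a journal capture, a regex over the version_set.cc lines is enough; a sketch written against the exact format above (the file name osd.log is a placeholder):

```python
import re

# Matches e.g.: "rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5"
CF_RE = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\), log number is (\d+)")

def column_families(lines):
    for line in lines:
        if m := CF_RE.search(line):
            yield m.group(1), int(m.group(2)), int(m.group(3))

# for name, cf_id, log_no in column_families(open("osd.log")):
#     print(name, cf_id, log_no)   # e.g. ('default', 0, 5) ... ('P', 11, 5)
```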
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a9d8c63d-28a7-49fe-927c-5351bb9d15e4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842149247255, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842149247428, "job": 1, "event": "recovery_finished"}
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
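_open_db echoes the exact comma-separated option string BlueStore handed to RocksDB; the Options.* values dumped earlier are the result of applying it. A sketch that splits the string back into a mapping; note the single unit-suffixed value, compaction_readahead_size=2MB, which the DBOptions dump reports as 2097152 bytes:

```python
# Option string copied verbatim from the _open_db line above.
opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

parsed = dict(kv.split("=", 1) for kv in opts.split(","))
print(parsed["compaction_readahead_size"])  # '2MB' -> 2097152 bytes in the dump
```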
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: freelist init
Jan 31 06:49:09 compute-1 ceph-osd[79145]: freelist _read_cfg
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
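The allocator line is mostly hex; decoding it shows the numbers are self-consistent: 0x1bfc00000 is 7511998464 bytes, the 7.0 GiB reported, and capacity minus free is 0x3000, three 4 KiB blocks already allocated, which is why the fragmentation figure is so small. Worked out:

```python
capacity = 0x1bfc00000    # from the _init_alloc line
free     = 0x1bfbfd000
block    = 0x1000         # min_alloc_size, also in the log

print(capacity)                     # 7511998464 -> the "7.0 GiB" reported
print(round(capacity / 2**30, 3))   # 6.996
print((capacity - free) // block)   # 3 blocks (12 KiB) in use
```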
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluefs umount
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb1400 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 06:49:09 compute-1 systemd[1]: Started libpod-conmon-582df423b338dbf5bcfd01a199b9e17bc859974fe21d52af14a4075d56627dc1.scope.
Jan 31 06:49:09 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b4f2fc0ee66125511b84f6bd9e9e6cae06f1415854154f9b3594cc735837cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b4f2fc0ee66125511b84f6bd9e9e6cae06f1415854154f9b3594cc735837cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b4f2fc0ee66125511b84f6bd9e9e6cae06f1415854154f9b3594cc735837cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b4f2fc0ee66125511b84f6bd9e9e6cae06f1415854154f9b3594cc735837cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:09 compute-1 podman[79350]: 2026-01-31 06:49:09.402484118 +0000 UTC m=+0.349927586 container init 582df423b338dbf5bcfd01a199b9e17bc859974fe21d52af14a4075d56627dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 06:49:09 compute-1 podman[79350]: 2026-01-31 06:49:09.407461725 +0000 UTC m=+0.354905173 container start 582df423b338dbf5bcfd01a199b9e17bc859974fe21d52af14a4075d56627dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:49:09 compute-1 podman[79350]: 2026-01-31 06:49:09.411363164 +0000 UTC m=+0.358806612 container attach 582df423b338dbf5bcfd01a199b9e17bc859974fe21d52af14a4075d56627dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
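The three podman events (init, start, attach) repeat the image's full label set each time; the durable identifiers are the container ID, the image digest, and the generated name thirsty_leakey. A regex sketch for extracting those from journal lines in the format shown above:

```python
import re

# Matches "container <event> <64-hex id> (image=<ref>, name=<name>, ...)"
POD_RE = re.compile(r"container (\w+) ([0-9a-f]{64}) \(image=([^,]+), name=([^,)]+)")

def podman_events(lines):
    for line in lines:
        if m := POD_RE.search(line):
            yield m.group(1), m.group(2)[:12], m.group(3), m.group(4)

# ('init', '582df423b338', 'quay.io/ceph/ceph@sha256:...', 'thirsty_leakey')
```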
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb1400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb1400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb1400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bdev(0x55c83ebb1400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluefs mount
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluefs mount shared_bdev_used = 4718592
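shared_bdev_used is in bytes; against the BlueFS allocation unit from the line above (block size 0x10000, 64 KiB) it comes to exactly 72 units, 4.5 MiB of the shared device held by BlueFS at mount time:

```python
used = 4718592          # bluefs mount shared_bdev_used (bytes)
au   = 0x10000          # bluefs _init_alloc block size (64 KiB)

print(used // au)       # 72 allocation units, no remainder
print(used / 2**20)     # 4.5 MiB
```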
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: RocksDB version: 7.9.2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Git sha 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: DB SUMMARY
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: DB Session ID:  PNSMXWVGIWE4PL0SKE5H
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: CURRENT file:  CURRENT
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
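DB SUMMARY names every file RocksDB found on this open: one SST (000030.sst) in db, none in db.slow, and WAL 000031.log (5093 bytes) in db.wal, the same log #31 that the first open replayed. A sketch that scrapes the inventory from these lines:

```python
import re

SST_RE = re.compile(r"SST files in (\S+) dir, Total Num: (\d+), files: (.*)")
WAL_RE = re.compile(r"Write Ahead Log file in (\S+): (\S+) size: (\d+)")

def db_summary(lines):
    """Collect the SST and WAL inventory from a DB SUMMARY block."""
    inv = {"sst": {}, "wal": []}
    for line in lines:
        if m := SST_RE.search(line):
            inv["sst"][m.group(1)] = m.group(3).split()
        elif m := WAL_RE.search(line):
            inv["wal"].append((m.group(1), m.group(2), int(m.group(3))))
    return inv
# -> {'sst': {'db': ['000030.sst'], 'db.slow': []},
#     'wal': [('db.wal', '000031.log', 5093)]}
```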
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                         Options.error_if_exists: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.create_if_missing: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                                     Options.env: 0x55c83eb81c70
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                                Options.info_log: 0x55c83dd51380
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                              Options.statistics: (nil)
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.use_fsync: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                              Options.db_log_dir: 
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.write_buffer_manager: 0x55c83ec94960
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.unordered_write: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.row_cache: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                              Options.wal_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.two_write_queues: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.wal_compression: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.atomic_flush: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.max_background_jobs: 4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.max_background_compactions: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.max_subcompactions: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.max_open_files: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Compression algorithms supported:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kZSTD supported: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kXpressCompression supported: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kBZip2Compression supported: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kLZ4Compression supported: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kZlibCompression supported: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         kSnappyCompression supported: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: DMutex implementation: pthread_mutex_t
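The capability report matters on this build: only LZ4, Zlib, LZ4HC and Snappy are compiled in, so an option string requesting kZSTD could not take effect with this binary. A sketch that collects the support matrix from the lines above:

```python
import re

SUP_RE = re.compile(r"(k\w+) supported: ([01])")

def compression_support(lines):
    """Map each 'kXxx supported: 0/1' line to a boolean."""
    return {m.group(1): m.group(2) == "1"
            for line in lines if (m := SUP_RE.search(line))}

# e.g. {'kZSTD': False, 'kLZ4Compression': True, 'kSnappyCompression': True, ...}
```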
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83eb7d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
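Comparing this [default] dump with the [O-2] dump above and the [m-0] dump that follows, the only substantive per-family difference is the merge operator (.T:int64_array.b:bitwise_xor here, None elsewhere); the block cache capacity differs between the two opens (536870912 bytes on the first, 483183820 on this one), not between families. RocksDB itself suppresses fully identical dumps with "(skipping printing options)", so diffing the parsed blocks is the quickest way to see what actually varies. A sketch:

```python
import re

OPT_RE = re.compile(r"Options\.([\w.\[\]]+): (.*)$")

def options_block(lines):
    """Collect 'Options.key: value' pairs from one column-family dump."""
    return {m.group(1): m.group(2).strip()
            for line in lines if (m := OPT_RE.search(line))}

def diff(a, b):
    return {k: (a.get(k), b.get(k)) for k in a.keys() | b.keys()
            if a.get(k) != b.get(k)}

# diff(options_block(default_lines), options_block(m0_lines))
# -> {'merge_operator': ('.T:int64_array.b:bitwise_xor', 'None')}
```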
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83eb7d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83eb7d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83eb7d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83eb7d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83eb7d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83eb7d120)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
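
The ~130-line options block above is repeated near-verbatim for every column family that follows; in the dumps shown here only the column family name, the table_factory pointer, and the block cache (address and capacity) differ. To compare dumps rather than eyeball them, the "Options.<name>: <value>" lines can be collected into a dict. A minimal sketch in Python; the journald prefix handling and the helper name parse_options_dump are assumptions, not part of Ceph or RocksDB:

    import re

    # Matches the "Options.<name>: <value>" lines of a rocksdb options dump;
    # the lowercase table_factory sub-block uses a different layout and is
    # deliberately not matched here.
    OPT_RE = re.compile(r'Options\.([\w.\[\]]+):\s*(.+?)\s*$')

    def parse_options_dump(lines):
        """Collect Options.<name>: <value> pairs from ceph-osd rocksdb log lines."""
        return {m.group(1): m.group(2)
                for line in lines
                if (m := OPT_RE.search(line))}

    sample = [
        "Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216",
        "Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64",
    ]
    print(parse_options_dump(sample))
    # {'write_buffer_size': '16777216', 'max_write_buffer_number': '64'}
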
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd75680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd75680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:           Options.merge_operator: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c83dd75680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c83dd6b610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.compression: LZ4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
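
With level_compaction_dynamic_level_bytes off, the level sizing just dumped (max_bytes_for_level_base = 1073741824, max_bytes_for_level_multiplier = 8, num_levels = 7) implies a geometric series of nominal level capacities; L0 is governed by file count (level0_file_num_compaction_trigger = 8), not bytes. A quick back-of-the-envelope sketch:

    base = 1073741824          # max_bytes_for_level_base (1 GiB)
    mult = 8.0                 # max_bytes_for_level_multiplier
    for level in range(1, 7):  # L1..L6; L0 is triggered by file count, not size
        cap = base * mult ** (level - 1)
        print(f"L{level}: {cap / 2**30:.0f} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB
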
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
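
The manifest recovery enumerates twelve column families (BlueStore's sharded key layout: default, m-*, p-*, O-*, L, P). If the name-to-ID mapping is needed programmatically, it can be scraped from these lines; a sketch, assuming the [db/version_set.cc:5581] line format shown above:

    import re

    CF_RE = re.compile(r'Column family \[([^\]]+)\] \(ID (\d+)\)')

    def column_families(log_lines):
        """Map column family name -> ID from rocksdb recovery log lines."""
        return {m.group(1): int(m.group(2))
                for line in log_lines
                if (m := CF_RE.search(line))}

    line = ("Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: "
            "[db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5")
    print(column_families([line]))   # {'O-2': 9}
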
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a9d8c63d-28a7-49fe-927c-5351bb9d15e4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842149547603, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842149561644, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842149, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9d8c63d-28a7-49fe-927c-5351bb9d15e4", "db_session_id": "PNSMXWVGIWE4PL0SKE5H", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842149565850, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842149, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9d8c63d-28a7-49fe-927c-5351bb9d15e4", "db_session_id": "PNSMXWVGIWE4PL0SKE5H", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842149573130, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842149, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a9d8c63d-28a7-49fe-927c-5351bb9d15e4", "db_session_id": "PNSMXWVGIWE4PL0SKE5H", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842149576139, "job": 1, "event": "recovery_finished"}
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c83ed5a380
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: DB pointer 0x55c83ec73a00
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 31 06:49:09 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:49:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:49:09 compute-1 ceph-osd[79145]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 06:49:09 compute-1 ceph-osd[79145]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 06:49:09 compute-1 ceph-osd[79145]: _get_class not permitted to load lua
Jan 31 06:49:09 compute-1 ceph-osd[79145]: _get_class not permitted to load sdk
Jan 31 06:49:09 compute-1 ceph-osd[79145]: _get_class not permitted to load test_remote_reads
Jan 31 06:49:09 compute-1 ceph-osd[79145]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 06:49:09 compute-1 ceph-osd[79145]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 06:49:09 compute-1 ceph-osd[79145]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 06:49:09 compute-1 ceph-osd[79145]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 06:49:09 compute-1 ceph-osd[79145]: osd.1 0 load_pgs
Jan 31 06:49:09 compute-1 ceph-osd[79145]: osd.1 0 load_pgs opened 0 pgs
Jan 31 06:49:09 compute-1 ceph-osd[79145]: osd.1 0 log_to_monitors true
Jan 31 06:49:09 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1[79141]: 2026-01-31T06:49:09.617+0000 7f82eaba9740 -1 osd.1 0 log_to_monitors true
Jan 31 06:49:10 compute-1 thirsty_leakey[79560]: {
Jan 31 06:49:10 compute-1 thirsty_leakey[79560]:     "46b4bff0-70b0-4ed6-b674-df49592cba42": {
Jan 31 06:49:10 compute-1 thirsty_leakey[79560]:         "ceph_fsid": "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a",
Jan 31 06:49:10 compute-1 thirsty_leakey[79560]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 06:49:10 compute-1 thirsty_leakey[79560]:         "osd_id": 1,
Jan 31 06:49:10 compute-1 thirsty_leakey[79560]:         "osd_uuid": "46b4bff0-70b0-4ed6-b674-df49592cba42",
Jan 31 06:49:10 compute-1 thirsty_leakey[79560]:         "type": "bluestore"
Jan 31 06:49:10 compute-1 thirsty_leakey[79560]:     }
Jan 31 06:49:10 compute-1 thirsty_leakey[79560]: }
Jan 31 06:49:10 compute-1 systemd[1]: libpod-582df423b338dbf5bcfd01a199b9e17bc859974fe21d52af14a4075d56627dc1.scope: Deactivated successfully.
Jan 31 06:49:10 compute-1 podman[79350]: 2026-01-31 06:49:10.199046645 +0000 UTC m=+1.146490113 container died 582df423b338dbf5bcfd01a199b9e17bc859974fe21d52af14a4075d56627dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:49:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-62b4f2fc0ee66125511b84f6bd9e9e6cae06f1415854154f9b3594cc735837cf-merged.mount: Deactivated successfully.
Jan 31 06:49:10 compute-1 podman[79350]: 2026-01-31 06:49:10.272401678 +0000 UTC m=+1.219845126 container remove 582df423b338dbf5bcfd01a199b9e17bc859974fe21d52af14a4075d56627dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leakey, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:49:10 compute-1 systemd[1]: libpod-conmon-582df423b338dbf5bcfd01a199b9e17bc859974fe21d52af14a4075d56627dc1.scope: Deactivated successfully.
Jan 31 06:49:10 compute-1 sudo[79233]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:10 compute-1 sudo[79807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:10 compute-1 sudo[79807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:10 compute-1 sudo[79807]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:10 compute-1 sudo[79832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:49:10 compute-1 sudo[79832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:10 compute-1 sudo[79832]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:10 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 06:49:10 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 06:49:10 compute-1 sudo[79857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:10 compute-1 sudo[79857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:10 compute-1 sudo[79857]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:10 compute-1 sudo[79882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:49:10 compute-1 sudo[79882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:10 compute-1 sudo[79882]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:10 compute-1 sudo[79907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:10 compute-1 sudo[79907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:10 compute-1 sudo[79907]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:10 compute-1 sudo[79932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 06:49:10 compute-1 sudo[79932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:11 compute-1 podman[80029]: 2026-01-31 06:49:11.251855041 +0000 UTC m=+0.052201641 container exec 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 06:49:11 compute-1 ceph-osd[79145]: osd.1 0 done with init, starting boot process
Jan 31 06:49:11 compute-1 ceph-osd[79145]: osd.1 0 start_boot
Jan 31 06:49:11 compute-1 ceph-osd[79145]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 06:49:11 compute-1 ceph-osd[79145]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 06:49:11 compute-1 ceph-osd[79145]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 06:49:11 compute-1 ceph-osd[79145]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 06:49:11 compute-1 ceph-osd[79145]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 31 06:49:11 compute-1 podman[80029]: 2026-01-31 06:49:11.438308602 +0000 UTC m=+0.238655162 container exec_died 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:49:11 compute-1 sudo[79932]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:11 compute-1 sudo[80078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:11 compute-1 sudo[80078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:11 compute-1 sudo[80078]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:11 compute-1 sudo[80103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:49:11 compute-1 sudo[80103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:11 compute-1 sudo[80103]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:11 compute-1 sudo[80128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:11 compute-1 sudo[80128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:11 compute-1 sudo[80128]: pam_unix(sudo:session): session closed for user root
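
The paired /bin/true and /bin/which python3 sudo sessions that repeat throughout this window are cephadm's host probe: before each real operation the orchestrator verifies that passwordless sudo works and that a Python interpreter is present, then runs the actual cephadm binary with it (next line). A minimal sketch of that probe sequence — illustrative only; the real implementation lives in the ceph-mgr cephadm module and runs these over SSH:

    # Re-create the probe pair seen in the sudo log lines above.
    import subprocess

    def probe_host() -> str:
        """Check passwordless sudo, then locate python3."""
        subprocess.run(["sudo", "true"], check=True)        # COMMAND=/bin/true
        out = subprocess.run(["sudo", "which", "python3"],  # COMMAND=/bin/which python3
                             check=True, capture_output=True, text=True)
        return out.stdout.strip()

    print("python3 at:", probe_host())
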
Jan 31 06:49:11 compute-1 sudo[80154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a -- inventory --format=json-pretty --filter-for-batch
Jan 31 06:49:11 compute-1 sudo[80154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:12 compute-1 podman[80218]: 2026-01-31 06:49:12.067827461 +0000 UTC m=+0.017343804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:16 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.194890022s, txc = 0x55c83fcf7200
Jan 31 06:49:17 compute-1 podman[80218]: 2026-01-31 06:49:17.20972287 +0000 UTC m=+5.159239193 container create 40e4012ed30d84cc86de43b2019e764e537247340599f1d24f9cc92b059237e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 06:49:17 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.324919224s, txc = 0x55c83fcf7500
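
The two log_latency_fn warnings mean BlueStore's RocksDB commit (_txc_committed_kv) took more than 5 s for two transactions — very slow for a KV sync, though not surprising here given that a later line shows this OSD's data device is loop3. A throwaway filter for pulling these events out of a journal dump (the regex is fitted to the message format above, nothing else is assumed):

    # Filter BlueStore slow-op warnings; feed journalctl output on stdin.
    import re, sys

    PAT = re.compile(r"log_latency_fn slow operation observed for (\S+), "
                     r"latency = ([0-9.]+)s")

    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            print(f"{m.group(1)}: {float(m.group(2)):.3f}s")

Fed with, say, journalctl -t ceph-osd, it would print the two 5.195 s and 5.325 s events above.
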
Jan 31 06:49:17 compute-1 systemd[1]: Started libpod-conmon-40e4012ed30d84cc86de43b2019e764e537247340599f1d24f9cc92b059237e0.scope.
Jan 31 06:49:17 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:18 compute-1 podman[80218]: 2026-01-31 06:49:18.075380944 +0000 UTC m=+6.024897297 container init 40e4012ed30d84cc86de43b2019e764e537247340599f1d24f9cc92b059237e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 06:49:18 compute-1 podman[80218]: 2026-01-31 06:49:18.081114125 +0000 UTC m=+6.030630448 container start 40e4012ed30d84cc86de43b2019e764e537247340599f1d24f9cc92b059237e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 06:49:18 compute-1 vibrant_bohr[80236]: 167 167
Jan 31 06:49:18 compute-1 systemd[1]: libpod-40e4012ed30d84cc86de43b2019e764e537247340599f1d24f9cc92b059237e0.scope: Deactivated successfully.
Jan 31 06:49:18 compute-1 podman[80218]: 2026-01-31 06:49:18.226405315 +0000 UTC m=+6.175921668 container attach 40e4012ed30d84cc86de43b2019e764e537247340599f1d24f9cc92b059237e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:49:18 compute-1 podman[80218]: 2026-01-31 06:49:18.226827806 +0000 UTC m=+6.176344129 container died 40e4012ed30d84cc86de43b2019e764e537247340599f1d24f9cc92b059237e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 06:49:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-284b6f2c1d32dce4a71edb3adae590bd76f14022f0882ed888edfa4e97dcd3ff-merged.mount: Deactivated successfully.
Jan 31 06:49:18 compute-1 podman[80218]: 2026-01-31 06:49:18.715445627 +0000 UTC m=+6.664961950 container remove 40e4012ed30d84cc86de43b2019e764e537247340599f1d24f9cc92b059237e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 06:49:18 compute-1 systemd[1]: libpod-conmon-40e4012ed30d84cc86de43b2019e764e537247340599f1d24f9cc92b059237e0.scope: Deactivated successfully.
Jan 31 06:49:18 compute-1 podman[80261]: 2026-01-31 06:49:18.903534098 +0000 UTC m=+0.102328000 container create cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 06:49:18 compute-1 podman[80261]: 2026-01-31 06:49:18.828971616 +0000 UTC m=+0.027765568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:18 compute-1 systemd[1]: Started libpod-conmon-cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8.scope.
Jan 31 06:49:18 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7d093bf37614566559c73a0ab105de7153bc82a00ab1d1d12c8fedb487bcce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7d093bf37614566559c73a0ab105de7153bc82a00ab1d1d12c8fedb487bcce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7d093bf37614566559c73a0ab105de7153bc82a00ab1d1d12c8fedb487bcce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7d093bf37614566559c73a0ab105de7153bc82a00ab1d1d12c8fedb487bcce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
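
The four identical xfs warnings are the kernel noting, once per bind mount podman sets up for the container, that the backing filesystem lacks the xfs bigtime feature and so its inode timestamps stop at the 32-bit limit. The 0x7fffffff it prints is the classic y2038 boundary:

    # 0x7fffffff seconds past the Unix epoch = the "timestamps until 2038"
    # limit printed by the kernel for non-bigtime xfs filesystems.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
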
Jan 31 06:49:19 compute-1 podman[80261]: 2026-01-31 06:49:19.079547262 +0000 UTC m=+0.278341194 container init cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 06:49:19 compute-1 podman[80261]: 2026-01-31 06:49:19.085501749 +0000 UTC m=+0.284295661 container start cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:49:19 compute-1 podman[80261]: 2026-01-31 06:49:19.15203368 +0000 UTC m=+0.350827622 container attach cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 06:49:20 compute-1 stupefied_golick[80277]: [
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:     {
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         "available": false,
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         "ceph_device": false,
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         "lsm_data": {},
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         "lvs": [],
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         "path": "/dev/sr0",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         "rejected_reasons": [
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "Has a FileSystem",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "Insufficient space (<5GB)"
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         ],
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         "sys_api": {
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "actuators": null,
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "device_nodes": "sr0",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "devname": "sr0",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "human_readable_size": "482.00 KB",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "id_bus": "ata",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "model": "QEMU DVD-ROM",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "nr_requests": "2",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "parent": "/dev/sr0",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "partitions": {},
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "path": "/dev/sr0",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "removable": "1",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "rev": "2.5+",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "ro": "0",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "rotational": "1",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "sas_address": "",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "sas_device_handle": "",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "scheduler_mode": "mq-deadline",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "sectors": 0,
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "sectorsize": "2048",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "size": 493568.0,
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "support_discard": "2048",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "type": "disk",
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:             "vendor": "QEMU"
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:         }
Jan 31 06:49:20 compute-1 stupefied_golick[80277]:     }
Jan 31 06:49:20 compute-1 stupefied_golick[80277]: ]
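
The lines up to the closing bracket are the JSON payload of the ceph-volume inventory call issued at 06:49:11: the only unclaimed block device cephadm can see on this host is the QEMU DVD drive /dev/sr0, rejected for OSD use ("Has a FileSystem", "Insufficient space (<5GB)"). A small sketch for post-processing such a report — the field names are taken from the JSON above; everything else is illustrative:

    # Summarize a ceph-volume inventory report like the one logged above.
    import json

    def summarize(report_json: str):
        for dev in json.loads(report_json):
            reasons = ", ".join(dev.get("rejected_reasons", []))
            yield dev["path"], dev["available"], reasons

    report = '''[{"available": false, "path": "/dev/sr0",
                  "rejected_reasons": ["Has a FileSystem",
                                       "Insufficient space (<5GB)"]}]'''
    for path, ok, why in summarize(report):
        print(path, "usable" if ok else f"rejected: {why}")
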
Jan 31 06:49:20 compute-1 systemd[1]: libpod-cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8.scope: Deactivated successfully.
Jan 31 06:49:20 compute-1 systemd[1]: libpod-cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8.scope: Consumed 1.025s CPU time.
Jan 31 06:49:20 compute-1 podman[81331]: 2026-01-31 06:49:20.165137614 +0000 UTC m=+0.021299044 container died cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 06:49:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-2a7d093bf37614566559c73a0ab105de7153bc82a00ab1d1d12c8fedb487bcce-merged.mount: Deactivated successfully.
Jan 31 06:49:20 compute-1 podman[81331]: 2026-01-31 06:49:20.78795443 +0000 UTC m=+0.644115790 container remove cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:49:20 compute-1 systemd[1]: libpod-conmon-cf4fd68c9900a31c2c6acc7fb26bcd1055e7cc672fc52b4825b34a3daa901ba8.scope: Deactivated successfully.
Jan 31 06:49:20 compute-1 sudo[80154]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 1.767 iops: 452.441 elapsed_sec: 6.631
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 0 waiting for initial osdmap
Jan 31 06:49:36 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1[79141]: 2026-01-31T06:49:36.033+0000 7f82e6b29640 -1 osd.1 0 waiting for initial osdmap
Jan 31 06:49:36 compute-1 ceph-osd[79145]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 347639.95 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
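
The bench and mClock figures are mutually consistent: the earlier "bench count 12288000 bsize 4 KiB" is 3,000 I/Os of 4 KiB (12,288,000 bytes); over the reported 6.631 s that yields the logged 452.441 IOPS and 1.767 MiB/s, and dividing the 157,286,400 B/s (150 MiB/s) per-shard bandwidth capacity by that IOPS number reproduces, to rounding, the 347,639.95 bytes/io cost. The arithmetic, using only values from the log:

    # Re-derive the osd bench / mClock figures from the log lines above.
    count_bytes, bsize, elapsed = 12_288_000, 4096, 6.631
    iops = (count_bytes / bsize) / elapsed       # ~452.4   (log: 452.441)
    bw_mib = count_bytes / 2**20 / elapsed       # ~1.767   (log: 1.767)
    per_shard_bw = 157_286_400                   # bytes/s, from the log
    cost_per_io = per_shard_bw / iops            # ~347640  (log: 347639.95)
    print(round(iops, 3), round(bw_mib, 3), round(cost_per_io, 2))
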
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 06:49:36 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-osd-1[79141]: 2026-01-31T06:49:36.317+0000 7f82e2151640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 12 set_numa_affinity not setting numa affinity
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 12 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 13 state: booting -> active
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:36 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:37 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:37 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
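
The pg[1.0] and pg[2.0] lines walk the peering state machine for two freshly created PGs: osd.1 is the only member of both the up and acting sets ([1], r=0), so each PG transitions Start -> Primary, and AllReplicasActivated fires trivially (there are no replicas to wait for), landing it in Started/Primary/Active. A hedged parser for the fixed prefix of these lines — the regex is fitted to the samples above and nothing more:

    # Extract epoch, pg id and event from OSD peering lines like the above.
    import re

    PG = re.compile(r"pg_epoch: (\d+) pg\[([0-9a-f]+\.[0-9a-f]+)\("
                    r".*?\] (state<[^>]*>.*|start_peering_interval.*)")

    sample = ("osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 "
              "ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 "
              "crt=0'0 mlcod 0'0 active mbc={}] "
              "state<Started/Primary/Active>: react AllReplicasActivated "
              "Activating complete")
    m = PG.search(sample)
    print(m.groups() if m else "no match")
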
Jan 31 06:49:45 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 21 pg[7.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:46 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:50 compute-1 sudo[81348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:50 compute-1 sudo[81348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:50 compute-1 sudo[81348]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:50 compute-1 sudo[81373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:49:50 compute-1 sudo[81373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:50 compute-1 sudo[81373]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:51 compute-1 sudo[81398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:49:51 compute-1 sudo[81398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:51 compute-1 sudo[81398]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:51 compute-1 sudo[81423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:49:51 compute-1 sudo[81423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:49:51 compute-1 podman[81488]: 2026-01-31 06:49:51.406213907 +0000 UTC m=+0.040668785 container create 00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:49:51 compute-1 systemd[1]: Started libpod-conmon-00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e.scope.
Jan 31 06:49:51 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:51 compute-1 podman[81488]: 2026-01-31 06:49:51.383404325 +0000 UTC m=+0.017859233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:51 compute-1 podman[81488]: 2026-01-31 06:49:51.485026347 +0000 UTC m=+0.119481255 container init 00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 06:49:51 compute-1 podman[81488]: 2026-01-31 06:49:51.491843437 +0000 UTC m=+0.126298315 container start 00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 06:49:51 compute-1 podman[81488]: 2026-01-31 06:49:51.497035064 +0000 UTC m=+0.131489942 container attach 00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:49:51 compute-1 mystifying_sammet[81506]: 167 167
Jan 31 06:49:51 compute-1 systemd[1]: libpod-00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e.scope: Deactivated successfully.
Jan 31 06:49:51 compute-1 conmon[81506]: conmon 00002061c7600bce00d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e.scope/container/memory.events
Jan 31 06:49:51 compute-1 podman[81488]: 2026-01-31 06:49:51.499315244 +0000 UTC m=+0.133770122 container died 00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:49:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-5677ddde564066b55e1aad504635aa7886d4ced1e09a3214f52fa5c3b36f0211-merged.mount: Deactivated successfully.
Jan 31 06:49:51 compute-1 podman[81488]: 2026-01-31 06:49:51.551908453 +0000 UTC m=+0.186363331 container remove 00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sammet, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:49:51 compute-1 systemd[1]: libpod-conmon-00002061c7600bce00d8a4df7d3249694daaa0e93935f99ef1db452470fc272e.scope: Deactivated successfully.
Jan 31 06:49:51 compute-1 podman[81525]: 2026-01-31 06:49:51.620776921 +0000 UTC m=+0.041261570 container create 4874b955f661f8982b649f8d92ed2ad09cfb9962a8870f14cd877dfa651ac27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 06:49:51 compute-1 systemd[1]: Started libpod-conmon-4874b955f661f8982b649f8d92ed2ad09cfb9962a8870f14cd877dfa651ac27f.scope.
Jan 31 06:49:51 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:49:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65102b156c0b577874377e0de15bb8c8c736051e80c0c8b9dead88bb7de67522/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65102b156c0b577874377e0de15bb8c8c736051e80c0c8b9dead88bb7de67522/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65102b156c0b577874377e0de15bb8c8c736051e80c0c8b9dead88bb7de67522/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65102b156c0b577874377e0de15bb8c8c736051e80c0c8b9dead88bb7de67522/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:51 compute-1 podman[81525]: 2026-01-31 06:49:51.602644152 +0000 UTC m=+0.023128821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:51 compute-1 podman[81525]: 2026-01-31 06:49:51.724401186 +0000 UTC m=+0.144885855 container init 4874b955f661f8982b649f8d92ed2ad09cfb9962a8870f14cd877dfa651ac27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:49:51 compute-1 podman[81525]: 2026-01-31 06:49:51.730898278 +0000 UTC m=+0.151382927 container start 4874b955f661f8982b649f8d92ed2ad09cfb9962a8870f14cd877dfa651ac27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 06:49:51 compute-1 podman[81525]: 2026-01-31 06:49:51.759202655 +0000 UTC m=+0.179687304 container attach 4874b955f661f8982b649f8d92ed2ad09cfb9962a8870f14cd877dfa651ac27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 06:49:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 26 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=26 pruub=9.315290451s) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active pruub 51.920871735s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:49:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 26 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=26 pruub=9.315290451s) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown pruub 51.920871735s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:52 compute-1 systemd[1]: libpod-4874b955f661f8982b649f8d92ed2ad09cfb9962a8870f14cd877dfa651ac27f.scope: Deactivated successfully.
Jan 31 06:49:52 compute-1 podman[81525]: 2026-01-31 06:49:52.336872985 +0000 UTC m=+0.757357634 container died 4874b955f661f8982b649f8d92ed2ad09cfb9962a8870f14cd877dfa651ac27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 06:49:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-65102b156c0b577874377e0de15bb8c8c736051e80c0c8b9dead88bb7de67522-merged.mount: Deactivated successfully.
Jan 31 06:49:52 compute-1 podman[81525]: 2026-01-31 06:49:52.458592337 +0000 UTC m=+0.879076986 container remove 4874b955f661f8982b649f8d92ed2ad09cfb9962a8870f14cd877dfa651ac27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:49:52 compute-1 systemd[1]: libpod-conmon-4874b955f661f8982b649f8d92ed2ad09cfb9962a8870f14cd877dfa651ac27f.scope: Deactivated successfully.
Jan 31 06:49:52 compute-1 systemd[1]: Reloading.
Jan 31 06:49:52 compute-1 systemd-rc-local-generator[81605]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:49:52 compute-1 systemd-sysv-generator[81613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:49:52 compute-1 systemd[1]: Reloading.
Jan 31 06:49:52 compute-1 systemd-rc-local-generator[81650]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:49:52 compute-1 systemd-sysv-generator[81654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:49:52 compute-1 systemd[1]: Starting Ceph mon.compute-1 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 06:49:53 compute-1 podman[81709]: 2026-01-31 06:49:53.211112912 +0000 UTC m=+0.037866001 container create 07192c2211e5bc23ba05da8f385f7bbb22d1eb714d391798a5a81bcce20712f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:49:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb16b10b4a53fe9be5ca3c519da94810d13c8f81a068a73dd2bbd21e416c8431/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb16b10b4a53fe9be5ca3c519da94810d13c8f81a068a73dd2bbd21e416c8431/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb16b10b4a53fe9be5ca3c519da94810d13c8f81a068a73dd2bbd21e416c8431/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb16b10b4a53fe9be5ca3c519da94810d13c8f81a068a73dd2bbd21e416c8431/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 06:49:53 compute-1 podman[81709]: 2026-01-31 06:49:53.263950607 +0000 UTC m=+0.090703706 container init 07192c2211e5bc23ba05da8f385f7bbb22d1eb714d391798a5a81bcce20712f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 06:49:53 compute-1 podman[81709]: 2026-01-31 06:49:53.271764053 +0000 UTC m=+0.098517142 container start 07192c2211e5bc23ba05da8f385f7bbb22d1eb714d391798a5a81bcce20712f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mon-compute-1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 06:49:53 compute-1 bash[81709]: 07192c2211e5bc23ba05da8f385f7bbb22d1eb714d391798a5a81bcce20712f3
Jan 31 06:49:53 compute-1 podman[81709]: 2026-01-31 06:49:53.195665744 +0000 UTC m=+0.022418833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:49:53 compute-1 systemd[1]: Started Ceph mon.compute-1 for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
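
From this point the monitor runs as a cephadm-managed systemd service wrapping the podman container created at 06:49:53; the bash[81709] line above is the unit's start-up script echoing the new container ID. Under cephadm's naming convention the unit would be ceph-<fsid>@mon.<hostname>.service, which suggests a quick liveness check like the following (the exact unit name is inferred from that convention, not from this log):

    # Ask systemd about the cephadm-style mon unit; the unit name follows
    # the "ceph-<fsid>@<daemon>.<id>" convention assumed from the log.
    import subprocess

    fsid = "ef73c6e0-6d85-55c2-9347-1f544d3e3d3a"
    unit = f"ceph-{fsid}@mon.compute-1.service"
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, "->", state or "unknown")
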
Jan 31 06:49:53 compute-1 ceph-mon[81728]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 06:49:53 compute-1 ceph-mon[81728]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 31 06:49:53 compute-1 ceph-mon[81728]: pidfile_write: ignore empty --pid-file
Jan 31 06:49:53 compute-1 ceph-mon[81728]: load: jerasure load: lrc 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: RocksDB version: 7.9.2
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Git sha 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: DB SUMMARY
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: DB Session ID:  HV1COZEZD0ZIIOS10G8C
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: CURRENT file:  CURRENT
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                         Options.error_if_exists: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                       Options.create_if_missing: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                                     Options.env: 0x55c9bd44fc40
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                                Options.info_log: 0x55c9bf18afc0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                              Options.statistics: (nil)
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                               Options.use_fsync: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                              Options.db_log_dir: 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                                 Options.wal_dir: 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                    Options.write_buffer_manager: 0x55c9bf19ab40
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.unordered_write: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                               Options.row_cache: None
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                              Options.wal_filter: None
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.two_write_queues: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.wal_compression: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.atomic_flush: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.max_background_jobs: 2
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.max_background_compactions: -1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.max_subcompactions: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                          Options.max_open_files: -1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Compression algorithms supported:
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         kZSTD supported: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         kXpressCompression supported: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         kBZip2Compression supported: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         kLZ4Compression supported: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         kZlibCompression supported: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         kSnappyCompression supported: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:           Options.merge_operator: 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:        Options.compaction_filter: None
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c9bf18ac00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c9bf1831f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:          Options.compression: NoCompression
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.num_levels: 7
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                           Options.bloom_locality: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                               Options.ttl: 2592000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                       Options.enable_blob_files: false
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                           Options.min_blob_size: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842193313607, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842193325512, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842193325693, "job": 1, "event": "recovery_finished"}
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 06:49:53 compute-1 sudo[81423]: pam_unix(sudo:session): session closed for user root
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c9bf1ace00
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: DB pointer 0x55c9bf236000
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:49:53 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9bf1831f0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 06:49:53 compute-1 ceph-mon[81728]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Jan 31 06:49:53 compute-1 ceph-mon[81728]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(???) e0 preinit fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).mds e1 new map
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 1 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e26 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/234462672' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osd.1 [v2:192.168.122.101:6800/2634111835,v1:192.168.122.101:6801/2634111835] boot
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e13: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/814170302' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: pgmap v60: 2 pgs: 2 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 06:49:53 compute-1 ceph-mon[81728]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/814170302' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e14: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1172611597' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1172611597' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e15: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: pgmap v63: 4 pgs: 1 unknown, 3 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e16: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3485467537' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3485467537' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e17: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mgrmap e9: compute-0.gghdjs(active, since 112s)
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e18: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: pgmap v66: 5 pgs: 2 unknown, 3 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2632076392' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2632076392' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e19: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/718073642' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e20: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: pgmap v69: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/718073642' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e21: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/419343111' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: pgmap v72: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:49:53 compute-1 ceph-mon[81728]: Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/419343111' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e22: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/19195425' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e23: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/19195425' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: pgmap v75: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:49:53 compute-1 ceph-mon[81728]: Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.client.admin.keyring
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e24: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:49:53 compute-1 ceph-mon[81728]: pgmap v77: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: Deploying daemon mon.compute-2 on compute-2
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/462523218' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 06:49:53 compute-1 ceph-mon[81728]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/462523218' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 06:49:53 compute-1 ceph-mon[81728]: osdmap e25: 2 total, 2 up, 2 in
Jan 31 06:49:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e26 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e26 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).osd e26 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 06:49:53 compute-1 ceph-mon[81728]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1d( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1e( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1c( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1f( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1b( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.a( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.9( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.8( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.6( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.7( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.4( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.2( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.5( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.b( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.3( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.c( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.d( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.e( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.11( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.f( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.10( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.12( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.13( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.14( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.15( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.16( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.17( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.19( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.18( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1a( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.9( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.8( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.a( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.6( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.7( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.2( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.4( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.0( empty local-lis/les=26/27 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.3( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.11( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.13( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.15( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.14( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.16( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.17( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.19( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.10( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 27 pg[2.1a( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [1] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:49:57 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Jan 31 06:49:57 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Jan 31 06:49:59 compute-1 ceph-mon[81728]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Jan 31 06:49:59 compute-1 ceph-mon[81728]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 06:49:59 compute-1 ceph-mon[81728]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 06:49:59 compute-1 ceph-mon[81728]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 06:50:01 compute-1 anacron[7266]: Job `cron.weekly' started
Jan 31 06:50:01 compute-1 anacron[7266]: Job `cron.weekly' terminated
Jan 31 06:50:02 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 31 06:50:02 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 31 06:50:02 compute-1 ceph-mon[81728]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 31 06:50:04 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e27 e27: 2 total, 2 up, 2 in
Jan 31 06:50:04 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e28 e28: 2 total, 2 up, 2 in
Jan 31 06:50:04 compute-1 ceph-mon[81728]: Deploying daemon mon.compute-1 on compute-1
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-0 calling monitor election
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: pgmap v81: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-2 calling monitor election
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: pgmap v82: 38 pgs: 1 peering, 31 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 06:50:04 compute-1 ceph-mon[81728]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 06:50:04 compute-1 ceph-mon[81728]: fsmap 
Jan 31 06:50:04 compute-1 ceph-mon[81728]: osdmap e26: 2 total, 2 up, 2 in
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mgrmap e9: compute-0.gghdjs(active, since 2m)
Jan 31 06:50:04 compute-1 ceph-mon[81728]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Jan 31 06:50:04 compute-1 ceph-mon[81728]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Jan 31 06:50:04 compute-1 ceph-mon[81728]:     application not enabled on pool 'backups'
Jan 31 06:50:04 compute-1 ceph-mon[81728]:     application not enabled on pool 'images'
Jan 31 06:50:04 compute-1 ceph-mon[81728]:     application not enabled on pool 'cephfs.cephfs.meta'
Jan 31 06:50:04 compute-1 ceph-mon[81728]:     application not enabled on pool 'cephfs.cephfs.data'
Jan 31 06:50:04 compute-1 ceph-mon[81728]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:04 compute-1 ceph-mon[81728]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:50:04 compute-1 ceph-mon[81728]: osdmap e27: 2 total, 2 up, 2 in
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: Deploying daemon mgr.compute-2.iujpur on compute-2
Jan 31 06:50:04 compute-1 ceph-mon[81728]: pgmap v84: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T06:49:51.792394Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,os=Linux}
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e29 e29: 2 total, 2 up, 2 in
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1804571096' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-0 calling monitor election
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-2 calling monitor election
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 3.1 scrub starts
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 3.1 scrub ok
Jan 31 06:50:04 compute-1 ceph-mon[81728]: pgmap v86: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-1 calling monitor election
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 3.2 scrub starts
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 3.2 scrub ok
Jan 31 06:50:04 compute-1 ceph-mon[81728]: pgmap v87: 100 pgs: 1 peering, 31 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 3.3 scrub starts
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 3.3 scrub ok
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 2.2 scrub starts
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 2.2 scrub ok
Jan 31 06:50:04 compute-1 ceph-mon[81728]: pgmap v88: 100 pgs: 1 peering, 31 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 3.4 deep-scrub starts
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 3.4 deep-scrub ok
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 2.3 scrub starts
Jan 31 06:50:04 compute-1 ceph-mon[81728]: 2.3 scrub ok
Jan 31 06:50:04 compute-1 ceph-mon[81728]: pgmap v89: 100 pgs: 1 peering, 31 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:04 compute-1 ceph-mon[81728]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 06:50:04 compute-1 ceph-mon[81728]: fsmap 
Jan 31 06:50:04 compute-1 ceph-mon[81728]: osdmap e28: 2 total, 2 up, 2 in
Jan 31 06:50:04 compute-1 ceph-mon[81728]: mgrmap e9: compute-0.gghdjs(active, since 2m)
Jan 31 06:50:04 compute-1 ceph-mon[81728]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Jan 31 06:50:04 compute-1 ceph-mon[81728]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Jan 31 06:50:04 compute-1 ceph-mon[81728]:     application not enabled on pool 'cephfs.cephfs.meta'
Jan 31 06:50:04 compute-1 ceph-mon[81728]:     application not enabled on pool 'cephfs.cephfs.data'
Jan 31 06:50:04 compute-1 ceph-mon[81728]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 31 06:50:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:05 compute-1 sudo[81769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:05 compute-1 sudo[81769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:05 compute-1 sudo[81769]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:05 compute-1 sudo[81794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:50:05 compute-1 sudo[81794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:05 compute-1 sudo[81794]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:05 compute-1 sudo[81819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:05 compute-1 sudo[81819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:05 compute-1 sudo[81819]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:05 compute-1 sudo[81844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:50:05 compute-1 sudo[81844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:05 compute-1 podman[81910]: 2026-01-31 06:50:05.488205899 +0000 UTC m=+0.054499669 container create ff5af9d8136d006f9deb282765014aad59253d1d77bd837fb6052f112d7e53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noether, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 06:50:05 compute-1 systemd[1]: Started libpod-conmon-ff5af9d8136d006f9deb282765014aad59253d1d77bd837fb6052f112d7e53ec.scope.
Jan 31 06:50:05 compute-1 podman[81910]: 2026-01-31 06:50:05.452370784 +0000 UTC m=+0.018664584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:50:05 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:50:05 compute-1 podman[81910]: 2026-01-31 06:50:05.622280649 +0000 UTC m=+0.188574449 container init ff5af9d8136d006f9deb282765014aad59253d1d77bd837fb6052f112d7e53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 06:50:05 compute-1 podman[81910]: 2026-01-31 06:50:05.630882946 +0000 UTC m=+0.197176716 container start ff5af9d8136d006f9deb282765014aad59253d1d77bd837fb6052f112d7e53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:50:05 compute-1 gallant_noether[81926]: 167 167
Jan 31 06:50:05 compute-1 systemd[1]: libpod-ff5af9d8136d006f9deb282765014aad59253d1d77bd837fb6052f112d7e53ec.scope: Deactivated successfully.
Jan 31 06:50:05 compute-1 podman[81910]: 2026-01-31 06:50:05.647744871 +0000 UTC m=+0.214038661 container attach ff5af9d8136d006f9deb282765014aad59253d1d77bd837fb6052f112d7e53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noether, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 06:50:05 compute-1 podman[81910]: 2026-01-31 06:50:05.648543842 +0000 UTC m=+0.214837612 container died ff5af9d8136d006f9deb282765014aad59253d1d77bd837fb6052f112d7e53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noether, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:50:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-e670b9f3ec1a87758a84eaccac960c9252f632f5a63ffcc649db5d7721787e5e-merged.mount: Deactivated successfully.
Jan 31 06:50:05 compute-1 podman[81910]: 2026-01-31 06:50:05.794417723 +0000 UTC m=+0.360711503 container remove ff5af9d8136d006f9deb282765014aad59253d1d77bd837fb6052f112d7e53ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:50:05 compute-1 systemd[1]: libpod-conmon-ff5af9d8136d006f9deb282765014aad59253d1d77bd837fb6052f112d7e53ec.scope: Deactivated successfully.
Jan 31 06:50:05 compute-1 systemd[1]: Reloading.
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1804571096' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:05 compute-1 ceph-mon[81728]: osdmap e29: 2 total, 2 up, 2 in
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hglnzn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hglnzn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:05 compute-1 ceph-mon[81728]: Deploying daemon mgr.compute-1.hglnzn on compute-1
Jan 31 06:50:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 06:50:05 compute-1 systemd-sysv-generator[81974]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:50:05 compute-1 systemd-rc-local-generator[81969]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:50:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e30 e30: 2 total, 2 up, 2 in
Jan 31 06:50:06 compute-1 systemd[1]: Reloading.
Jan 31 06:50:06 compute-1 systemd-sysv-generator[82015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:50:06 compute-1 systemd-rc-local-generator[82010]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:50:06 compute-1 systemd[1]: Starting Ceph mgr.compute-1.hglnzn for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 06:50:06 compute-1 podman[82069]: 2026-01-31 06:50:06.566635698 +0000 UTC m=+0.043909110 container create eb04de431f696300d281db8c1f556e3be7d7fbf92801d52584d1d6df44ec6d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:50:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4869c9aca5eb8eecfde19efc2edda3af9d48de9a88bb24dbbd7608ee085ed3c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:50:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4869c9aca5eb8eecfde19efc2edda3af9d48de9a88bb24dbbd7608ee085ed3c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:50:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4869c9aca5eb8eecfde19efc2edda3af9d48de9a88bb24dbbd7608ee085ed3c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:50:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4869c9aca5eb8eecfde19efc2edda3af9d48de9a88bb24dbbd7608ee085ed3c3/merged/var/lib/ceph/mgr/ceph-compute-1.hglnzn supports timestamps until 2038 (0x7fffffff)
Jan 31 06:50:06 compute-1 podman[82069]: 2026-01-31 06:50:06.544034241 +0000 UTC m=+0.021307673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:50:06 compute-1 podman[82069]: 2026-01-31 06:50:06.645951322 +0000 UTC m=+0.123224754 container init eb04de431f696300d281db8c1f556e3be7d7fbf92801d52584d1d6df44ec6d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 06:50:06 compute-1 podman[82069]: 2026-01-31 06:50:06.650673946 +0000 UTC m=+0.127947358 container start eb04de431f696300d281db8c1f556e3be7d7fbf92801d52584d1d6df44ec6d55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:50:06 compute-1 ceph-mgr[82088]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 06:50:06 compute-1 ceph-mgr[82088]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 31 06:50:06 compute-1 ceph-mgr[82088]: pidfile_write: ignore empty --pid-file
Jan 31 06:50:06 compute-1 bash[82069]: eb04de431f696300d281db8c1f556e3be7d7fbf92801d52584d1d6df44ec6d55
Jan 31 06:50:06 compute-1 systemd[1]: Started Ceph mgr.compute-1.hglnzn for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 06:50:06 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'alerts'
Jan 31 06:50:06 compute-1 sudo[81844]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:06 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/803331295' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 31 06:50:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 06:50:06 compute-1 ceph-mon[81728]: osdmap e30: 2 total, 2 up, 2 in
Jan 31 06:50:06 compute-1 ceph-mon[81728]: 3.5 scrub starts
Jan 31 06:50:06 compute-1 ceph-mon[81728]: 3.5 scrub ok
Jan 31 06:50:06 compute-1 ceph-mon[81728]: pgmap v92: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:07 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e31 e31: 2 total, 2 up, 2 in
Jan 31 06:50:07 compute-1 ceph-mgr[82088]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 06:50:07 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'balancer'
Jan 31 06:50:07 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:07.162+0000 7f2a62f7f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 06:50:07 compute-1 ceph-mgr[82088]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 06:50:07 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:07.459+0000 7f2a62f7f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 06:50:07 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'cephadm'
Jan 31 06:50:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 06:50:08 compute-1 ceph-mon[81728]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 06:50:08 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/803331295' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 06:50:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:50:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:50:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 06:50:08 compute-1 ceph-mon[81728]: osdmap e31: 2 total, 2 up, 2 in
Jan 31 06:50:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:08 compute-1 ceph-mon[81728]: Deploying daemon crash.compute-2 on compute-2
Jan 31 06:50:08 compute-1 ceph-mon[81728]: 3.6 scrub starts
Jan 31 06:50:08 compute-1 ceph-mon[81728]: 3.6 scrub ok
Jan 31 06:50:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e32 e32: 2 total, 2 up, 2 in
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 31 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=9.884221077s) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active pruub 68.602577209s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=31 pruub=9.884221077s) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown pruub 68.602577209s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.1( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.2( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.3( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.4( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.a( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.b( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.c( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.d( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.10( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.11( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.e( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.f( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.14( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.15( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.12( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.13( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.16( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.17( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.1a( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.18( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.1b( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.19( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.1e( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.1f( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.1c( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.1d( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.9( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.7( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.6( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.5( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 32 pg[7.8( empty local-lis/les=21/22 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1019929561 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:09 compute-1 ceph-mon[81728]: osdmap e32: 2 total, 2 up, 2 in
Jan 31 06:50:09 compute-1 ceph-mon[81728]: pgmap v95: 193 pgs: 1 peering, 124 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:50:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:50:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:50:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:09 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e33 e33: 2 total, 2 up, 2 in
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.1c( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.1d( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.12( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.1f( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.10( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.11( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.16( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.13( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.14( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.15( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.b( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.8( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.a( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.e( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.6( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.9( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=31/33 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.17( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.1( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.7( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.3( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.d( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.2( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.c( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.f( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.1e( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.19( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.18( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.5( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.1b( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.1a( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 33 pg[7.4( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=21/21 les/c/f=22/22/0 sis=31) [1] r=0 lpr=31 pi=[21,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:09 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'crash'
Jan 31 06:50:10 compute-1 ceph-mgr[82088]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 06:50:10 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'dashboard'
Jan 31 06:50:10 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:10.030+0000 7f2a62f7f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 06:50:10 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Jan 31 06:50:10 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Jan 31 06:50:10 compute-1 ceph-mon[81728]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 06:50:10 compute-1 ceph-mon[81728]: Cluster is now healthy
Jan 31 06:50:10 compute-1 ceph-mon[81728]: osdmap e33: 2 total, 2 up, 2 in
Jan 31 06:50:10 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:11 compute-1 ceph-mon[81728]: 2.4 deep-scrub starts
Jan 31 06:50:11 compute-1 ceph-mon[81728]: 2.4 deep-scrub ok
Jan 31 06:50:11 compute-1 ceph-mon[81728]: 3.7 scrub starts
Jan 31 06:50:11 compute-1 ceph-mon[81728]: 3.7 scrub ok
Jan 31 06:50:11 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/4171748591' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 06:50:11 compute-1 ceph-mon[81728]: pgmap v97: 193 pgs: 25 activating, 31 unknown, 137 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:11 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/4171748591' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 06:50:11 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'devicehealth'
Jan 31 06:50:11 compute-1 ceph-mgr[82088]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 06:50:11 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 06:50:11 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:11.880+0000 7f2a62f7f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 06:50:12 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e34 e34: 3 total, 2 up, 3 in
Jan 31 06:50:12 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 06:50:12 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 06:50:12 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]:   from numpy import show_config as show_numpy_config
Jan 31 06:50:12 compute-1 ceph-mgr[82088]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 06:50:12 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:12.494+0000 7f2a62f7f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 06:50:12 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'influx'
Jan 31 06:50:12 compute-1 ceph-mgr[82088]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 06:50:12 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'insights'
Jan 31 06:50:12 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:12.773+0000 7f2a62f7f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 06:50:13 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'iostat'
Jan 31 06:50:13 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/4277762610' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c89cb65f-6fb8-418d-9343-39d375c50eea"}]: dispatch
Jan 31 06:50:13 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2722832315' entity='client.admin' 
Jan 31 06:50:13 compute-1 ceph-mon[81728]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c89cb65f-6fb8-418d-9343-39d375c50eea"}]: dispatch
Jan 31 06:50:13 compute-1 ceph-mon[81728]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c89cb65f-6fb8-418d-9343-39d375c50eea"}]': finished
Jan 31 06:50:13 compute-1 ceph-mon[81728]: osdmap e34: 3 total, 2 up, 3 in
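The 'osd new' dispatch/finish above is the ceph-volume bootstrap path: a freshly prepared OSD registers its fsid with the monitors and receives the next free id, which is why the osdmap reports 3 total while only 2 are up yet. A minimal sketch of the same call (UUID copied verbatim from the log; it would be run with the bootstrap-osd keyring, as client.bootstrap-osd does here):

    import subprocess

    # Register a new OSD by fsid; the monitor prints the allocated id.
    result = subprocess.run(
        ["ceph", "osd", "new", "c89cb65f-6fb8-418d-9343-39d375c50eea"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # the new OSD id, e.g. "2"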
Jan 31 06:50:13 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:13 compute-1 ceph-mon[81728]: pgmap v99: 193 pgs: 25 activating, 31 unknown, 137 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:13 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1100119358' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 06:50:13 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Jan 31 06:50:13 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Jan 31 06:50:13 compute-1 ceph-mgr[82088]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 06:50:13 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'k8sevents'
Jan 31 06:50:13 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:13.323+0000 7f2a62f7f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 06:50:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020053135 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:14 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 31 06:50:14 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 31 06:50:14 compute-1 ceph-mon[81728]: 2.5 deep-scrub starts
Jan 31 06:50:14 compute-1 ceph-mon[81728]: 2.5 deep-scrub ok
Jan 31 06:50:14 compute-1 ceph-mon[81728]: from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:50:14 compute-1 ceph-mon[81728]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 06:50:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:14 compute-1 ceph-mon[81728]: Saving service ingress.rgw.default spec with placement count:2
Jan 31 06:50:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:14 compute-1 ceph-mon[81728]: 3.8 deep-scrub starts
Jan 31 06:50:14 compute-1 ceph-mon[81728]: 3.8 deep-scrub ok
Jan 31 06:50:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:15 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'localpool'
Jan 31 06:50:15 compute-1 ceph-mon[81728]: 2.6 scrub starts
Jan 31 06:50:15 compute-1 ceph-mon[81728]: 2.6 scrub ok
Jan 31 06:50:15 compute-1 ceph-mon[81728]: 3.9 scrub starts
Jan 31 06:50:15 compute-1 ceph-mon[81728]: 3.9 scrub ok
Jan 31 06:50:15 compute-1 ceph-mon[81728]: pgmap v100: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 06:50:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
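The six 'osd pool set ... pgp_num_actual' dispatches above are the mgr stepping pgp_num up to match pg_num on each pool. A sketch of the equivalent manual loop, using the pool names and value exactly as they appear in the log:

    import subprocess

    # Bring pgp_num in line with pg_num (32) for each pool, mirroring the
    # commands the mgr issued above.
    for pool in ["backups", "cephfs.cephfs.data", "cephfs.cephfs.meta",
                 "images", "vms", "volumes"]:
        subprocess.run(
            ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", "32"],
            check=True,
        )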
Jan 31 06:50:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e35 e35: 3 total, 2 up, 3 in
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.18( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.18( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.1a( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.19( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.1b( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.1a( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.1b( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.1a( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.1c( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.c( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.e( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.d( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.e( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.e( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.f( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.1( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.3( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.1( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.2( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.5( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.2( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.7( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.5( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.4( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.7( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.d( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.a( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.8( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.8( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.a( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.9( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.9( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.16( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.15( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.15( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.15( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.17( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.13( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.12( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.11( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.10( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[5.1f( empty local-lis/les=0/0 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.1c( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[4.1f( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[6.1e( empty local-lis/les=0/0 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.19( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.970373154s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560714722s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.19( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.970330238s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560714722s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.1d( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969848633s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.560348511s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.1d( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969820976s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.560348511s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.13( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.970213890s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.560852051s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.13( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.970193863s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.560852051s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.15( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.969930649s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560668945s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.15( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.969915390s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560668945s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.10( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969838142s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.560661316s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.10( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969805717s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.560661316s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.13( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.969668388s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560638428s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.13( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.969645500s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560638428s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.14( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969872475s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.560974121s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.14( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969855309s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.560974121s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.10( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.969508171s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560722351s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.10( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.969493866s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560722351s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.a( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969902992s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561233521s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.a( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969878197s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561233521s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.969151497s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560569763s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.969137192s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560569763s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.b( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969718933s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561210632s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.b( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969702721s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561210632s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.968824387s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560409546s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.968802452s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560409546s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.8( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969557762s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561233521s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.8( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969542503s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561233521s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.968582153s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560340881s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.968565941s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560340881s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.9( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969371796s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561233521s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.9( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969356537s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561233521s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.e( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969297409s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561271667s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.e( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969268799s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561271667s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.6( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969145775s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561279297s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.6( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.969106674s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561279297s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.1( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.968083382s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560409546s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.1( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.968055725s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560409546s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.4( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.970221519s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.562644958s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.4( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.970208168s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.562644958s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.4( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.967659950s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560188293s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.4( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.967643738s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560188293s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.6( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.967543602s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560165405s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.6( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.967527390s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560165405s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.3( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968820572s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561538696s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.3( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968799591s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561538696s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.2( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968720436s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561538696s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.2( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968705177s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561538696s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.9( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.967187881s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560127258s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.9( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.967171669s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560127258s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.a( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.967059135s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560096741s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.a( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.967041969s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560096741s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.f( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968688011s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561820984s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.f( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968657494s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561820984s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.1e( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968745232s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.561988831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.1e( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968729973s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.561988831s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.1b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.949427605s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.542762756s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.1b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.949411392s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.542762756s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.18( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968550682s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.562004089s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.18( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968537331s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.562004089s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.1b( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968667030s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active pruub 75.562202454s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[7.1b( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=35 pruub=9.968642235s) [0] r=-1 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.562202454s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.1e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.966333389s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.559997559s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.1e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.966317177s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.559997559s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.1f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.966332436s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 active pruub 78.560089111s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 35 pg[2.1f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=35 pruub=12.966318130s) [0] r=-1 lpr=35 pi=[26,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.560089111s@ mbc={}] state<Start>: transitioning to Stray
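The burst of start_peering_interval / 'transitioning to Stray' lines above is osd.1 reacting to osdmap epoch 35: after the new OSD registration and the pgp_num changes, many pool-2 and pool-7 placement groups moved from acting set [1] to [0], so osd.1 restarts the peering interval and becomes a Stray replica for them while staying Primary elsewhere. One way to confirm a single PG settled after such a remap, using a pg id copied from the log:

    import json
    import subprocess

    # Query one PG and print its state and acting set; once peering
    # completes the state should return to active+clean.
    info = json.loads(subprocess.run(
        ["ceph", "pg", "2.19", "query", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(info["state"], info["acting"])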
Jan 31 06:50:15 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 06:50:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e2 new map
Jan 31 06:50:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T06:50:15.676838+0000
                                           modified        2026-01-31T06:50:15.676874+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Jan 31 06:50:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e36 e36: 3 total, 2 up, 3 in
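The print_map dump above is epoch 2 of a just-created 'cephfs' filesystem: data pool 7 and metadata pool 6 are wired in, max_mds is 1, and no MDS has joined yet (in/up are empty). The same fields can be read back on demand; a short sketch (JSON field names as in recent Ceph releases, treat them as an assumption):

    import json
    import subprocess

    # Dump the FSMap as JSON and pick out the fields shown in print_map.
    fsmap = json.loads(subprocess.run(
        ["ceph", "fs", "dump", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    for fs in fsmap["filesystems"]:
        m = fs["mdsmap"]
        print(m["fs_name"], m["data_pools"], m["metadata_pool"], m["max_mds"])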
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.18( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.1a( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.19( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.1a( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.e( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.c( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.1b( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.d( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.e( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.1( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.3( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.2( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.7( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.5( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.d( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.a( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.5( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.8( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.8( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.a( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.15( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.9( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.15( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.17( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.13( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[4.1f( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=35) [1] r=0 lpr=35 pi=[28,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.1c( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.12( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[6.1e( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=31/31 les/c/f=32/32/0 sis=35) [1] r=0 lpr=35 pi=[31,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=35) [1] r=0 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:50:16 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'mirroring'
Jan 31 06:50:16 compute-1 ceph-mon[81728]: 3.a deep-scrub starts
Jan 31 06:50:16 compute-1 ceph-mon[81728]: 3.a deep-scrub ok
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:50:16 compute-1 ceph-mon[81728]: osdmap e35: 3 total, 2 up, 3 in
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 31 06:50:16 compute-1 ceph-mon[81728]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 06:50:16 compute-1 ceph-mon[81728]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 06:50:16 compute-1 ceph-mon[81728]: osdmap e36: 3 total, 2 up, 3 in
Jan 31 06:50:16 compute-1 ceph-mon[81728]: fsmap cephfs:0
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:16 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'nfs'
Jan 31 06:50:17 compute-1 ceph-mgr[82088]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 06:50:17 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:17.134+0000 7f2a62f7f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 06:50:17 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'orchestrator'
Jan 31 06:50:17 compute-1 ceph-mon[81728]: from='client.14265 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:50:17 compute-1 ceph-mon[81728]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 06:50:17 compute-1 ceph-mon[81728]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:17 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:17 compute-1 ceph-mgr[82088]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 06:50:17 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 06:50:17 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:17.948+0000 7f2a62f7f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 06:50:18 compute-1 ceph-mon[81728]: from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:50:18 compute-1 ceph-mon[81728]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 06:50:18 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 31 06:50:18 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:18 compute-1 ceph-mgr[82088]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 06:50:18 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'osd_support'
Jan 31 06:50:18 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:18.289+0000 7f2a62f7f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 06:50:18 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:18 compute-1 ceph-mgr[82088]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 06:50:18 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 06:50:18 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:18.550+0000 7f2a62f7f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 06:50:18 compute-1 ceph-mgr[82088]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 06:50:18 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'progress'
Jan 31 06:50:18 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:18.867+0000 7f2a62f7f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 06:50:19 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 31 06:50:19 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 31 06:50:19 compute-1 ceph-mgr[82088]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 06:50:19 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'prometheus'
Jan 31 06:50:19 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:19.133+0000 7f2a62f7f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 06:50:19 compute-1 ceph-mon[81728]: Deploying daemon osd.2 on compute-2
Jan 31 06:50:19 compute-1 ceph-mon[81728]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:20 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 31 06:50:20 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 31 06:50:20 compute-1 ceph-mgr[82088]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 06:50:20 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'rbd_support'
Jan 31 06:50:20 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:20.178+0000 7f2a62f7f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 06:50:20 compute-1 ceph-mgr[82088]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 06:50:20 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'restful'
Jan 31 06:50:20 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:20.529+0000 7f2a62f7f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 06:50:20 compute-1 ceph-mon[81728]: 3.b scrub starts
Jan 31 06:50:20 compute-1 ceph-mon[81728]: 3.b scrub ok
Jan 31 06:50:20 compute-1 ceph-mon[81728]: 2.7 scrub starts
Jan 31 06:50:20 compute-1 ceph-mon[81728]: 2.7 scrub ok
Jan 31 06:50:20 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/852021558' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 31 06:50:20 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/852021558' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 06:50:21 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 31 06:50:21 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 31 06:50:21 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'rgw'
Jan 31 06:50:21 compute-1 ceph-mon[81728]: 3.12 deep-scrub starts
Jan 31 06:50:21 compute-1 ceph-mon[81728]: 3.12 deep-scrub ok
Jan 31 06:50:21 compute-1 ceph-mon[81728]: 2.8 scrub starts
Jan 31 06:50:21 compute-1 ceph-mon[81728]: 2.8 scrub ok
Jan 31 06:50:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:21 compute-1 ceph-mon[81728]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:21 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/782995466' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 06:50:21 compute-1 ceph-mon[81728]: 2.b scrub starts
Jan 31 06:50:21 compute-1 ceph-mon[81728]: 2.b scrub ok
Jan 31 06:50:22 compute-1 ceph-mgr[82088]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 06:50:22 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:22.081+0000 7f2a62f7f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 06:50:22 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'rook'
Jan 31 06:50:23 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:23 compute-1 ceph-mon[81728]: 3.17 scrub starts
Jan 31 06:50:23 compute-1 ceph-mon[81728]: 3.17 scrub ok
Jan 31 06:50:23 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2392112215' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 06:50:23 compute-1 ceph-mon[81728]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:24 compute-1 ceph-mgr[82088]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 06:50:24 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:24.557+0000 7f2a62f7f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 06:50:24 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'selftest'
Jan 31 06:50:24 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3239664565' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 31 06:50:24 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:24 compute-1 ceph-mon[81728]: pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:24 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:24 compute-1 ceph-mgr[82088]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 06:50:24 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:24.858+0000 7f2a62f7f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 06:50:24 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'snap_schedule'
Jan 31 06:50:25 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 31 06:50:25 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 31 06:50:25 compute-1 ceph-mgr[82088]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 06:50:25 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:25.139+0000 7f2a62f7f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 06:50:25 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'stats'
Jan 31 06:50:25 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'status'
Jan 31 06:50:25 compute-1 ceph-mgr[82088]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 06:50:25 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:25.752+0000 7f2a62f7f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 06:50:25 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'telegraf'
Jan 31 06:50:25 compute-1 ceph-mon[81728]: 3.18 scrub starts
Jan 31 06:50:25 compute-1 ceph-mon[81728]: 3.18 scrub ok
Jan 31 06:50:25 compute-1 ceph-mon[81728]: 2.f scrub starts
Jan 31 06:50:25 compute-1 ceph-mon[81728]: 2.f scrub ok
Jan 31 06:50:25 compute-1 ceph-mon[81728]: from='osd.2 [v2:192.168.122.102:6800/1739985396,v1:192.168.122.102:6801/1739985396]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 06:50:25 compute-1 ceph-mon[81728]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 06:50:26 compute-1 ceph-mgr[82088]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 06:50:26 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:26.022+0000 7f2a62f7f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 06:50:26 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'telemetry'
Jan 31 06:50:26 compute-1 systemd[72625]: Starting Mark boot as successful...
Jan 31 06:50:26 compute-1 systemd[72625]: Finished Mark boot as successful.
Jan 31 06:50:26 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e37 e37: 3 total, 2 up, 3 in
Jan 31 06:50:26 compute-1 ceph-mgr[82088]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 06:50:26 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 06:50:26 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:26.695+0000 7f2a62f7f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 06:50:26 compute-1 ceph-mon[81728]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 06:50:26 compute-1 ceph-mon[81728]: Standby manager daemon compute-2.iujpur started
Jan 31 06:50:26 compute-1 ceph-mon[81728]: pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:26 compute-1 ceph-mon[81728]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 06:50:26 compute-1 ceph-mon[81728]: osdmap e37: 3 total, 2 up, 3 in
Jan 31 06:50:26 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:26 compute-1 ceph-mon[81728]: from='osd.2 [v2:192.168.122.102:6800/1739985396,v1:192.168.122.102:6801/1739985396]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 06:50:26 compute-1 ceph-mon[81728]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 06:50:27 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 31 06:50:27 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 31 06:50:27 compute-1 ceph-mgr[82088]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 06:50:27 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:27.412+0000 7f2a62f7f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 06:50:27 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'volumes'
Jan 31 06:50:27 compute-1 sudo[82125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:27 compute-1 sudo[82125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:27 compute-1 sudo[82125]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:27 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e38 e38: 3 total, 2 up, 3 in
Jan 31 06:50:27 compute-1 sudo[82150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:50:27 compute-1 sudo[82150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:27 compute-1 sudo[82150]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[6.1e( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.091624260s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.119125366s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.1f( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.086977959s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.114501953s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[6.1e( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.091624260s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.119125366s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.1f( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.086977959s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.114501953s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[6.12( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.091011047s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.119102478s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[6.12( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.091011047s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.119102478s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.532875061s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 86.561264038s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.532875061s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.561264038s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=13.532229424s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 91.560844421s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=13.532342911s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 91.560966492s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=13.532229424s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.560844421s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=13.532342911s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.560966492s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[6.17( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.085218430s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.114036560s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[6.17( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.085218430s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.114036560s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.15( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.085126877s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.114028931s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=13.532042503s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 91.560974121s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=13.532042503s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.560974121s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.085106850s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.114112854s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.085106850s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.114112854s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.15( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.085126877s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.114028931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.531960487s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 86.561073303s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.531960487s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.561073303s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.084376335s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.113632202s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.084355354s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.113632202s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.084376335s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.113632202s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.084355354s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.113632202s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.9( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.084621429s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.114021301s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.531613350s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 86.561019897s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.9( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.084621429s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.114021301s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.531613350s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.561019897s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.8( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.084036827s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.113517761s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.8( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.084036827s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.113517761s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.531224251s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 86.560806274s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[5.4( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.083590508s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.113212585s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.531224251s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.560806274s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[5.4( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.083590508s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.113212585s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=13.532470703s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 active pruub 91.562210083s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.530947685s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 86.560699463s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.530947685s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.560699463s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=38 pruub=13.532470703s) [] r=-1 lpr=38 pi=[31,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.562210083s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.1( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.082990646s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.112869263s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.082859039s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.112762451s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[4.1( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.082990646s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.112869263s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.082859039s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.112762451s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[5.e( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.082643509s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.112632751s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.082159996s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.112182617s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.082159996s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.112182617s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.089022636s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.119110107s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.089022636s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.119110107s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.512864113s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 86.543067932s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[5.1a( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.081890106s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.112113953s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.512864113s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.543067932s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[5.1a( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.081890106s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.112113953s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.530033112s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 86.560386658s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.530033112s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.560386658s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[5.e( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.082643509s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.112632751s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[6.1c( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.083996773s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 active pruub 90.114524841s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 38 pg[6.1c( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=38 pruub=12.083996773s) [] r=-1 lpr=38 pi=[35,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.114524841s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:27 compute-1 sudo[82175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:27 compute-1 sudo[82175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:27 compute-1 sudo[82175]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:27 compute-1 sudo[82200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:50:27 compute-1 sudo[82200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:27 compute-1 sudo[82200]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:27 compute-1 sudo[82225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:27 compute-1 sudo[82225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:27 compute-1 sudo[82225]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:27 compute-1 sudo[82250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 06:50:27 compute-1 sudo[82250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:28 compute-1 ceph-mgr[82088]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 06:50:28 compute-1 ceph-mgr[82088]: mgr[py] Loading python module 'zabbix'
Jan 31 06:50:28 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:28.201+0000 7f2a62f7f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 06:50:28 compute-1 ceph-mon[81728]: mgrmap e10: compute-0.gghdjs(active, since 2m), standbys: compute-2.iujpur
Jan 31 06:50:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr metadata", "who": "compute-2.iujpur", "id": "compute-2.iujpur"}]: dispatch
Jan 31 06:50:28 compute-1 ceph-mon[81728]: 2.11 scrub starts
Jan 31 06:50:28 compute-1 ceph-mon[81728]: 2.11 scrub ok
Jan 31 06:50:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:28 compute-1 ceph-mon[81728]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 31 06:50:28 compute-1 ceph-mon[81728]: osdmap e38: 3 total, 2 up, 3 in
Jan 31 06:50:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:28 compute-1 ceph-mgr[82088]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 06:50:28 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mgr-compute-1-hglnzn[82084]: 2026-01-31T06:50:28.462+0000 7f2a62f7f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 06:50:28 compute-1 ceph-mgr[82088]: ms_deliver_dispatch: unhandled message 0x56050d96d600 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Jan 31 06:50:28 compute-1 ceph-mgr[82088]: client.0 ms_handle_reset on v2:192.168.122.100:6800/4113492602
Jan 31 06:50:28 compute-1 podman[82349]: 2026-01-31 06:50:28.641034919 +0000 UTC m=+0.337287073 container exec 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 06:50:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:28 compute-1 podman[82349]: 2026-01-31 06:50:28.739289018 +0000 UTC m=+0.435541152 container exec_died 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 31 06:50:29 compute-1 sudo[82250]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:29 compute-1 ceph-mgr[82088]: client.0 ms_handle_reset on v2:192.168.122.100:6800/4113492602
Jan 31 06:50:29 compute-1 ceph-mon[81728]: purged_snaps scrub starts
Jan 31 06:50:29 compute-1 ceph-mon[81728]: purged_snaps scrub ok
Jan 31 06:50:29 compute-1 ceph-mon[81728]: from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 06:50:29 compute-1 ceph-mon[81728]: pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:29 compute-1 ceph-mon[81728]: Standby manager daemon compute-1.hglnzn started
Jan 31 06:50:29 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:30 compute-1 sudo[82436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:30 compute-1 sudo[82436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:30 compute-1 sudo[82436]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:30 compute-1 ceph-mon[81728]: 3.19 scrub starts
Jan 31 06:50:30 compute-1 ceph-mon[81728]: 3.19 scrub ok
Jan 31 06:50:30 compute-1 ceph-mon[81728]: from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 06:50:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:30 compute-1 ceph-mon[81728]: mgrmap e11: compute-0.gghdjs(active, since 2m), standbys: compute-2.iujpur, compute-1.hglnzn
Jan 31 06:50:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr metadata", "who": "compute-1.hglnzn", "id": "compute-1.hglnzn"}]: dispatch
Jan 31 06:50:30 compute-1 ceph-mon[81728]: pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:30 compute-1 sudo[82461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:50:30 compute-1 sudo[82461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:30 compute-1 sudo[82461]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:31 compute-1 sudo[82486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:31 compute-1 sudo[82486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:31 compute-1 sudo[82486]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:31 compute-1 sudo[82511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:50:31 compute-1 sudo[82511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:31 compute-1 sudo[82511]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:32 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:32 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:32 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:32 compute-1 ceph-mon[81728]: from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 06:50:32 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:32 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:33 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:33 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 31 06:50:33 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 31 06:50:34 compute-1 ceph-mon[81728]: 3.1e scrub starts
Jan 31 06:50:34 compute-1 ceph-mon[81728]: 3.1e scrub ok
Jan 31 06:50:34 compute-1 ceph-mon[81728]: pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:34 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:34 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2803010226' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 06:50:34 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:35 compute-1 ceph-mon[81728]: 3.1f deep-scrub starts
Jan 31 06:50:35 compute-1 ceph-mon[81728]: 3.1f deep-scrub ok
Jan 31 06:50:35 compute-1 ceph-mon[81728]: 2.14 scrub starts
Jan 31 06:50:35 compute-1 ceph-mon[81728]: 2.14 scrub ok
Jan 31 06:50:35 compute-1 ceph-mon[81728]: pgmap v114: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:35 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:35 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1673261415' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 06:50:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:40 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 31 06:50:40 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 31 06:50:40 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:40 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/4025839986' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 31 06:50:40 compute-1 ceph-mon[81728]: pgmap v115: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:41 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 31 06:50:41 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 31 06:50:42 compute-1 ceph-mon[81728]: 4.4 scrub starts
Jan 31 06:50:42 compute-1 ceph-mon[81728]: 4.4 scrub ok
Jan 31 06:50:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:42 compute-1 ceph-mon[81728]: 4.7 scrub starts
Jan 31 06:50:42 compute-1 ceph-mon[81728]: 4.7 scrub ok
Jan 31 06:50:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:42 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1596617942' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 31 06:50:42 compute-1 ceph-mon[81728]: pgmap v116: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:42 compute-1 ceph-mon[81728]: 2.16 scrub starts
Jan 31 06:50:42 compute-1 ceph-mon[81728]: 2.16 scrub ok
Jan 31 06:50:42 compute-1 ceph-mon[81728]: 4.b scrub starts
Jan 31 06:50:42 compute-1 ceph-mon[81728]: pgmap v117: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:42 compute-1 ceph-mon[81728]: 4.b scrub ok
Jan 31 06:50:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:43 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 31 06:50:43 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 31 06:50:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:44 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 31 06:50:44 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 31 06:50:47 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 31 06:50:47 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 31 06:50:47 compute-1 ceph-mon[81728]: 2.17 scrub starts
Jan 31 06:50:47 compute-1 ceph-mon[81728]: 2.17 scrub ok
Jan 31 06:50:47 compute-1 ceph-mon[81728]: 4.f scrub starts
Jan 31 06:50:47 compute-1 ceph-mon[81728]: 4.f scrub ok
Jan 31 06:50:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:47 compute-1 ceph-mon[81728]: pgmap v118: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:48 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 31 06:50:48 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 31 06:50:48 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:48 compute-1 sudo[82567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:48 compute-1 sudo[82567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:48 compute-1 sudo[82567]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:48 compute-1 sudo[82592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 06:50:48 compute-1 sudo[82592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:48 compute-1 sudo[82592]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:48 compute-1 sudo[82617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:48 compute-1 sudo[82617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:48 compute-1 sudo[82617]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:48 compute-1 ceph-mon[81728]: 2.1a scrub starts
Jan 31 06:50:48 compute-1 ceph-mon[81728]: 2.1a scrub ok
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: 7.1 scrub starts
Jan 31 06:50:48 compute-1 ceph-mon[81728]: pgmap v119: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:48 compute-1 ceph-mon[81728]: 7.1 scrub ok
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: pgmap v120: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: 7.7 scrub starts
Jan 31 06:50:48 compute-1 ceph-mon[81728]: 7.7 scrub ok
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: 7.c scrub starts
Jan 31 06:50:48 compute-1 ceph-mon[81728]: 7.c scrub ok
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [0, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.a", "id": [1, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.b", "id": [0, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.13", "id": [0, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.14", "id": [0, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.1f", "id": [0, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.6", "id": [0, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.c", "id": [1, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.10", "id": [0, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.1e", "id": [0, 2]}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: pgmap v121: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:50:48 compute-1 sudo[82642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph
Jan 31 06:50:48 compute-1 sudo[82642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:48 compute-1 sudo[82642]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:48 compute-1 sudo[82667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:48 compute-1 sudo[82667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:48 compute-1 sudo[82667]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new
Jan 31 06:50:49 compute-1 sudo[82692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82692]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e39 e39: 3 total, 2 up, 3 in
Jan 31 06:50:49 compute-1 sudo[82717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:49 compute-1 sudo[82717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82717]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e39 crush map has features 3314933000854323200, adjusting msgr requires
Jan 31 06:50:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e39 crush map has features 432629239337189376, adjusting msgr requires
Jan 31 06:50:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e39 crush map has features 432629239337189376, adjusting msgr requires
Jan 31 06:50:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e39 crush map has features 432629239337189376, adjusting msgr requires
Jan 31 06:50:49 compute-1 ceph-osd[79145]: osd.1 39 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 31 06:50:49 compute-1 ceph-osd[79145]: osd.1 39 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 31 06:50:49 compute-1 ceph-osd[79145]: osd.1 39 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 31 06:50:49 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 39 pg[6.a( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=14.627057076s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 114.114028931s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:49 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 39 pg[6.a( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=14.627057076s) [] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 114.114028931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:49 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=39 pruub=8.074358940s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 107.562339783s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:49 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=39 pruub=8.074358940s) [] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.562339783s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:49 compute-1 sudo[82742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:50:49 compute-1 sudo[82742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82742]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:49 compute-1 sudo[82767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82767]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new
Jan 31 06:50:49 compute-1 sudo[82792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82792]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:49 compute-1 sudo[82840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82840]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new
Jan 31 06:50:49 compute-1 sudo[82865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82865]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:49 compute-1 sudo[82890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82890]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new
Jan 31 06:50:49 compute-1 sudo[82915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82915]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:49 compute-1 sudo[82940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82940]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 31 06:50:49 compute-1 sudo[82965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82965]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[82990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:49 compute-1 sudo[82990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[82990]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[83015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config
Jan 31 06:50:49 compute-1 sudo[83015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[83015]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[83040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:49 compute-1 sudo[83040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[83040]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[83065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config
Jan 31 06:50:49 compute-1 sudo[83065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[83065]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[83090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:49 compute-1 sudo[83090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[83090]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[83115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new
Jan 31 06:50:49 compute-1 sudo[83115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[83115]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[83140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:49 compute-1 sudo[83140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[83140]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:49 compute-1 sudo[83165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:50:49 compute-1 sudo[83165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:49 compute-1 sudo[83165]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:50 compute-1 sudo[83190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:50 compute-1 sudo[83190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:50 compute-1 sudo[83190]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:50 compute-1 sudo[83215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new
Jan 31 06:50:50 compute-1 sudo[83215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:50 compute-1 sudo[83215]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:50 compute-1 ceph-mon[81728]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 06:50:50 compute-1 ceph-mon[81728]: Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 06:50:50 compute-1 ceph-mon[81728]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 06:50:50 compute-1 ceph-mon[81728]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 06:50:50 compute-1 ceph-mon[81728]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [0, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.a", "id": [1, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.b", "id": [0, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.13", "id": [0, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.14", "id": [0, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.1f", "id": [0, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.6", "id": [0, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.c", "id": [1, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.10", "id": [0, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "7.1e", "id": [0, 2]}]': finished
Jan 31 06:50:50 compute-1 ceph-mon[81728]: osdmap e39: 3 total, 2 up, 3 in
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:50 compute-1 ceph-mon[81728]: 4.10 scrub starts
Jan 31 06:50:50 compute-1 ceph-mon[81728]: 4.10 scrub ok
Jan 31 06:50:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:50 compute-1 sudo[83263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:50 compute-1 sudo[83263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:50 compute-1 sudo[83263]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:50 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 31 06:50:50 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 31 06:50:50 compute-1 sudo[83288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new
Jan 31 06:50:50 compute-1 sudo[83288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:50 compute-1 sudo[83288]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:50 compute-1 sudo[83313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:50 compute-1 sudo[83313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:50 compute-1 sudo[83313]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:50 compute-1 sudo[83338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new
Jan 31 06:50:50 compute-1 sudo[83338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:50 compute-1 sudo[83338]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:50 compute-1 sudo[83363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:50 compute-1 sudo[83363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:50 compute-1 sudo[83363]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:50 compute-1 sudo[83388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf.new /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 06:50:50 compute-1 sudo[83388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:50 compute-1 sudo[83388]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:51 compute-1 ceph-mon[81728]: Updating compute-1:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 06:50:51 compute-1 ceph-mon[81728]: Updating compute-2:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 06:50:51 compute-1 ceph-mon[81728]: Updating compute-0:/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/config/ceph.conf
Jan 31 06:50:51 compute-1 ceph-mon[81728]: 7.d scrub starts
Jan 31 06:50:51 compute-1 ceph-mon[81728]: 7.d scrub ok
Jan 31 06:50:51 compute-1 ceph-mon[81728]: pgmap v123: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:51 compute-1 ceph-mon[81728]: 4.11 scrub starts
Jan 31 06:50:51 compute-1 ceph-mon[81728]: 4.11 scrub ok
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:50:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:52 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:53 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 31 06:50:53 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 31 06:50:53 compute-1 ceph-mon[81728]: pgmap v124: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 31 06:50:53 compute-1 ceph-mon[81728]: 4.12 scrub starts
Jan 31 06:50:53 compute-1 ceph-mon[81728]: 4.12 scrub ok
Jan 31 06:50:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:53 compute-1 ceph-mon[81728]: OSD bench result of 4413.448469 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 06:50:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.1f( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.1f( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.12( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.12( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.15( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.15( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.a( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=10.295711517s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 114.114028931s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[6.a( empty local-lis/les=35/36 n=0 ec=31/19 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=10.295683861s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 114.114028931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.1( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[4.1( empty local-lis/les=35/36 n=0 ec=28/15 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40 pruub=3.743342638s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.562339783s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=31/33 n=0 ec=31/21 lis/c=31/31 les/c/f=33/33/0 sis=40 pruub=3.743305206s) [2] r=-1 lpr=40 pi=[31,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.562339783s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=35/36 n=0 ec=27/14 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=29/17 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:50:53 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 40 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:50:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:54 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 06:50:54 compute-1 ceph-mon[81728]: 7.12 scrub starts
Jan 31 06:50:54 compute-1 ceph-mon[81728]: 7.12 scrub ok
Jan 31 06:50:54 compute-1 ceph-mon[81728]: osd.2 [v2:192.168.122.102:6800/1739985396,v1:192.168.122.102:6801/1739985396] boot
Jan 31 06:50:54 compute-1 ceph-mon[81728]: osdmap e40: 3 total, 3 up, 3 in
Jan 31 06:50:54 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 06:50:54 compute-1 ceph-mon[81728]: pgmap v126: 193 pgs: 31 peering, 162 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:50:55 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 31 06:50:55 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 31 06:50:55 compute-1 ceph-mon[81728]: 4.16 scrub starts
Jan 31 06:50:55 compute-1 ceph-mon[81728]: 4.16 scrub ok
Jan 31 06:50:55 compute-1 ceph-mon[81728]: osdmap e41: 3 total, 3 up, 3 in
Jan 31 06:50:56 compute-1 ceph-mon[81728]: 7.15 scrub starts
Jan 31 06:50:56 compute-1 ceph-mon[81728]: 7.15 scrub ok
Jan 31 06:50:56 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:56 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:56 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fbgckm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 06:50:56 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.fbgckm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 06:50:56 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:56 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:56 compute-1 ceph-mon[81728]: Deploying daemon rgw.rgw.compute-2.fbgckm on compute-2
Jan 31 06:50:56 compute-1 ceph-mon[81728]: pgmap v128: 193 pgs: 31 peering, 162 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:50:57 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 31 06:50:57 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 31 06:50:57 compute-1 ceph-mon[81728]: 7.17 scrub starts
Jan 31 06:50:57 compute-1 ceph-mon[81728]: 7.17 scrub ok
Jan 31 06:50:58 compute-1 sshd-session[71371]: Received disconnect from 38.102.83.142 port 39992:11: disconnected by user
Jan 31 06:50:58 compute-1 sshd-session[71371]: Disconnected from user zuul 38.102.83.142 port 39992
Jan 31 06:50:58 compute-1 sshd-session[71368]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:50:58 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 06:50:58 compute-1 systemd[1]: session-19.scope: Consumed 7.416s CPU time.
Jan 31 06:50:58 compute-1 systemd-logind[788]: Session 19 logged out. Waiting for processes to exit.
Jan 31 06:50:58 compute-1 systemd-logind[788]: Removed session 19.
Jan 31 06:50:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:50:58 compute-1 sudo[83413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:58 compute-1 sudo[83413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:58 compute-1 sudo[83413]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:58 compute-1 ceph-mon[81728]: 4.17 deep-scrub starts
Jan 31 06:50:58 compute-1 ceph-mon[81728]: 4.17 deep-scrub ok
Jan 31 06:50:58 compute-1 ceph-mon[81728]: 7.16 deep-scrub starts
Jan 31 06:50:58 compute-1 ceph-mon[81728]: 7.16 deep-scrub ok
Jan 31 06:50:58 compute-1 ceph-mon[81728]: pgmap v129: 193 pgs: 31 peering, 162 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:50:58 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:58 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.izlkft", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 06:50:58 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.izlkft", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 06:50:58 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:50:58 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:50:58 compute-1 sudo[83438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:50:58 compute-1 sudo[83438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:58 compute-1 sudo[83438]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:58 compute-1 sudo[83463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:50:58 compute-1 sudo[83463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:50:58 compute-1 sudo[83463]: pam_unix(sudo:session): session closed for user root
Jan 31 06:50:58 compute-1 sudo[83488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:50:58 compute-1 sudo[83488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
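This sudo invocation is the orchestrator running the per-cluster copy of the cephadm binary (kept under /var/lib/ceph/<fsid>/cephadm.<digest>) with the internal _orch deploy subcommand; in recent releases the daemon spec appears to be passed to it on stdin rather than as arguments. The result can be inspected afterwards with something like:

    sudo cephadm ls          # daemons cephadm manages on this host (if cephadm is installed)
    ceph orch ps compute-1   # the orchestrator's view, from any admin node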
Jan 31 06:50:59 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 31 06:50:59 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 31 06:50:59 compute-1 podman[83551]: 2026-01-31 06:50:59.245222049 +0000 UTC m=+0.044279074 container create 5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:50:59 compute-1 systemd[1]: Started libpod-conmon-5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad.scope.
Jan 31 06:50:59 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:50:59 compute-1 podman[83551]: 2026-01-31 06:50:59.301798672 +0000 UTC m=+0.100855717 container init 5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 06:50:59 compute-1 podman[83551]: 2026-01-31 06:50:59.307301025 +0000 UTC m=+0.106358050 container start 5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:50:59 compute-1 podman[83551]: 2026-01-31 06:50:59.310733585 +0000 UTC m=+0.109790710 container attach 5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 06:50:59 compute-1 festive_payne[83567]: 167 167
Jan 31 06:50:59 compute-1 systemd[1]: libpod-5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad.scope: Deactivated successfully.
Jan 31 06:50:59 compute-1 conmon[83567]: conmon 5b6047eaab9e51006b1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad.scope/container/memory.events
Jan 31 06:50:59 compute-1 podman[83551]: 2026-01-31 06:50:59.312177142 +0000 UTC m=+0.111234167 container died 5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:50:59 compute-1 podman[83551]: 2026-01-31 06:50:59.227044665 +0000 UTC m=+0.026101730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:50:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-026dbc16ee114cb2d1a53af25f2d185ee8cdd7fa4bb7b57862917f09bb32daa7-merged.mount: Deactivated successfully.
Jan 31 06:50:59 compute-1 podman[83551]: 2026-01-31 06:50:59.352274427 +0000 UTC m=+0.151331452 container remove 5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 06:50:59 compute-1 systemd[1]: libpod-conmon-5b6047eaab9e51006b1c837b0d8235d41bcf2b8e2787b2bfe607e8d5d09e80ad.scope: Deactivated successfully.
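The short-lived festive_payne container above (create, init, start, attach, died, remove, all within about 100 ms) is cephadm probing the image before deploying a daemon: the "167 167" it printed is the uid and gid of the ceph user baked into the image. A rough equivalent of that probe, assuming cephadm's usual stat-based check:

    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph

The conmon "Failed to open cgroups file ... memory.events" warning appears to be a benign race with such instantly-exiting containers.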
Jan 31 06:50:59 compute-1 systemd[1]: Reloading.
Jan 31 06:50:59 compute-1 systemd-rc-local-generator[83613]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:50:59 compute-1 systemd-sysv-generator[83616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:50:59 compute-1 systemd[1]: Reloading.
Jan 31 06:50:59 compute-1 systemd-rc-local-generator[83650]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:50:59 compute-1 systemd-sysv-generator[83654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:50:59 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 06:50:59 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.izlkft for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 06:50:59 compute-1 ceph-mon[81728]: Deploying daemon rgw.rgw.compute-1.izlkft on compute-1
Jan 31 06:50:59 compute-1 ceph-mon[81728]: 7.11 scrub starts
Jan 31 06:50:59 compute-1 ceph-mon[81728]: 7.11 scrub ok
Jan 31 06:50:59 compute-1 ceph-mon[81728]: 7.19 scrub starts
Jan 31 06:50:59 compute-1 ceph-mon[81728]: 7.19 scrub ok
Jan 31 06:50:59 compute-1 ceph-mon[81728]: 4.1e scrub starts
Jan 31 06:50:59 compute-1 ceph-mon[81728]: 4.1e scrub ok
Jan 31 06:50:59 compute-1 ceph-mon[81728]: osdmap e42: 3 total, 3 up, 3 in
Jan 31 06:50:59 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2214860940' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 06:50:59 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
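These dispatches come from the RGW daemons themselves, tagging the pools they create with the rgw application. The same operation done by hand would be:

    ceph osd pool application enable .rgw.root rgw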
Jan 31 06:51:00 compute-1 podman[83711]: 2026-01-31 06:51:00.108307075 +0000 UTC m=+0.057975781 container create 25e09241b643f32ef708262b43c18b05954f8cd577a912de124d96fde2c1c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-rgw-rgw-compute-1-izlkft, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 06:51:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d5b9bdc8f7be931078def19ed2c5a78889cff682c42127fd817e74a62d931c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:51:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d5b9bdc8f7be931078def19ed2c5a78889cff682c42127fd817e74a62d931c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:51:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d5b9bdc8f7be931078def19ed2c5a78889cff682c42127fd817e74a62d931c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:51:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d5b9bdc8f7be931078def19ed2c5a78889cff682c42127fd817e74a62d931c/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.izlkft supports timestamps until 2038 (0x7fffffff)
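The xfs "supports timestamps until 2038" messages fire each time podman bind-mounts a path into a container from an xfs filesystem created without the bigtime feature; they are informational, not errors. On a reasonably new xfsprogs, whether bigtime is enabled can be checked with something like:

    xfs_info /var/lib/containers | grep bigtime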
Jan 31 06:51:00 compute-1 podman[83711]: 2026-01-31 06:51:00.16915732 +0000 UTC m=+0.118826036 container init 25e09241b643f32ef708262b43c18b05954f8cd577a912de124d96fde2c1c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-rgw-rgw-compute-1-izlkft, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:51:00 compute-1 podman[83711]: 2026-01-31 06:51:00.172977249 +0000 UTC m=+0.122645945 container start 25e09241b643f32ef708262b43c18b05954f8cd577a912de124d96fde2c1c32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-rgw-rgw-compute-1-izlkft, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:51:00 compute-1 bash[83711]: 25e09241b643f32ef708262b43c18b05954f8cd577a912de124d96fde2c1c32d
Jan 31 06:51:00 compute-1 podman[83711]: 2026-01-31 06:51:00.083719725 +0000 UTC m=+0.033388451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:51:00 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.izlkft for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 06:51:00 compute-1 sudo[83488]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:00 compute-1 radosgw[83730]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 06:51:00 compute-1 radosgw[83730]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 31 06:51:00 compute-1 radosgw[83730]: framework: beast
Jan 31 06:51:00 compute-1 radosgw[83730]: framework conf key: endpoint, val: 192.168.122.101:8082
Jan 31 06:51:00 compute-1 radosgw[83730]: init_numa not setting numa affinity
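radosgw is up with the beast frontend on 192.168.122.101:8082, the endpoint cephadm derived from the rgw service spec (saved a few lines below as rgw.rgw with placement on all three hosts). The stored spec can be dumped back out of the orchestrator with:

    ceph orch ls rgw --export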
Jan 31 06:51:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 06:51:00 compute-1 ceph-mon[81728]: 6.17 scrub starts
Jan 31 06:51:00 compute-1 ceph-mon[81728]: 6.17 scrub ok
Jan 31 06:51:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ibblfd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 06:51:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.ibblfd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 06:51:00 compute-1 ceph-mon[81728]: pgmap v131: 194 pgs: 1 unknown, 1 active+clean+laggy, 192 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:51:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:00 compute-1 ceph-mon[81728]: Deploying daemon rgw.rgw.compute-0.ibblfd on compute-0
Jan 31 06:51:00 compute-1 ceph-mon[81728]: 5.3 deep-scrub starts
Jan 31 06:51:00 compute-1 ceph-mon[81728]: 5.3 deep-scrub ok
Jan 31 06:51:00 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 06:51:00 compute-1 ceph-mon[81728]: osdmap e43: 3 total, 3 up, 3 in
Jan 31 06:51:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 06:51:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 31 06:51:01 compute-1 ceph-mon[81728]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2878303115' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 06:51:01 compute-1 ceph-mon[81728]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 06:51:01 compute-1 ceph-mon[81728]: osdmap e44: 3 total, 3 up, 3 in
Jan 31 06:51:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2878303115' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 06:51:01 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 06:51:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2214860940' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 06:51:01 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
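POOL_APP_NOT_ENABLED fires here because RGW has just created a pool and the "osd pool application enable" that tags it is still in flight; the matching 'finished' entries a second later clear the condition. Had it persisted, the untagged pool could be identified with:

    ceph health detail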
Jan 31 06:51:01 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:01 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wcykmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 06:51:01 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.wcykmw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 06:51:01 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
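As with the RGW keyrings, the mgr creates one auth entity per MDS daemon, this time with the narrower mds profile on the mon. The CLI equivalent, caps copied from the log:

    ceph auth get-or-create mds.cephfs.compute-2.wcykmw \
        mon 'profile mds' osd 'allow rw tag cephfs *=*' mds 'allow'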
Jan 31 06:51:02 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 31 06:51:02 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 31 06:51:02 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 06:51:02 compute-1 ceph-mon[81728]: 3.15 scrub starts
Jan 31 06:51:02 compute-1 ceph-mon[81728]: 3.15 scrub ok
Jan 31 06:51:02 compute-1 ceph-mon[81728]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 06:51:02 compute-1 ceph-mon[81728]: Deploying daemon mds.cephfs.compute-2.wcykmw on compute-2
Jan 31 06:51:02 compute-1 ceph-mon[81728]: 7.1a scrub starts
Jan 31 06:51:02 compute-1 ceph-mon[81728]: 7.1a scrub ok
Jan 31 06:51:02 compute-1 ceph-mon[81728]: pgmap v134: 195 pgs: 2 unknown, 1 active+clean+laggy, 192 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:51:02 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 06:51:02 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 06:51:02 compute-1 ceph-mon[81728]: osdmap e45: 3 total, 3 up, 3 in
Jan 31 06:51:03 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 31 06:51:03 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 31 06:51:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 06:51:03 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 46 pg[10.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 06:51:03 compute-1 ceph-mon[81728]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2878303115' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 06:51:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e3 new map
Jan 31 06:51:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T06:50:15.676838+0000
                                           modified        2026-01-31T06:50:15.676874+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.wcykmw{-1:24139} state up:standby seq 1 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 06:51:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e4 new map
Jan 31 06:51:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T06:50:15.676838+0000
                                           modified        2026-01-31T06:51:03.941771+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.wcykmw{0:24139} state up:creating seq 1 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 31 06:51:03 compute-1 ceph-mon[81728]: 6.12 scrub starts
Jan 31 06:51:03 compute-1 ceph-mon[81728]: 6.12 scrub ok
Jan 31 06:51:03 compute-1 ceph-mon[81728]: 7.1c scrub starts
Jan 31 06:51:03 compute-1 ceph-mon[81728]: 7.1c scrub ok
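The two print_map dumps above trace the first MDS joining: in e3 the cephfs filesystem has no rank assigned (in and up are empty) and mds.cephfs.compute-2.wcykmw sits in up:standby, while in e4 the same daemon holds rank 0 in up:creating, initializing the metadata pool structures. The current map can be read at any time with:

    ceph fs dump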
Jan 31 06:51:03 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:03 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kanoes", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 06:51:03 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.kanoes", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 06:51:03 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:03 compute-1 ceph-mon[81728]: osdmap e46: 3 total, 3 up, 3 in
Jan 31 06:51:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3280907012' entity='client.rgw.rgw.compute-0.ibblfd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 06:51:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2878303115' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 06:51:03 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 06:51:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2214860940' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 06:51:03 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 06:51:04 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 31 06:51:04 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 31 06:51:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 06:51:04 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 47 pg[10.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:04 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 31 06:51:04 compute-1 ceph-mon[81728]: Deploying daemon mds.cephfs.compute-0.kanoes on compute-0
Jan 31 06:51:04 compute-1 ceph-mon[81728]: 2.18 scrub starts
Jan 31 06:51:04 compute-1 ceph-mon[81728]: 2.18 scrub ok
Jan 31 06:51:04 compute-1 ceph-mon[81728]: mds.? [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] up:boot
Jan 31 06:51:04 compute-1 ceph-mon[81728]: daemon mds.cephfs.compute-2.wcykmw assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 06:51:04 compute-1 ceph-mon[81728]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 06:51:04 compute-1 ceph-mon[81728]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 06:51:04 compute-1 ceph-mon[81728]: fsmap cephfs:0 1 up:standby
Jan 31 06:51:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.wcykmw"}]: dispatch
Jan 31 06:51:04 compute-1 ceph-mon[81728]: fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:creating}
Jan 31 06:51:04 compute-1 ceph-mon[81728]: daemon mds.cephfs.compute-2.wcykmw is now active in filesystem cephfs as rank 0
Jan 31 06:51:04 compute-1 ceph-mon[81728]: 5.18 scrub starts
Jan 31 06:51:04 compute-1 ceph-mon[81728]: 5.18 scrub ok
Jan 31 06:51:04 compute-1 ceph-mon[81728]: pgmap v137: 196 pgs: 1 unknown, 1 active+clean+laggy, 194 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 06:51:04 compute-1 ceph-mon[81728]: 5.5 deep-scrub starts
Jan 31 06:51:04 compute-1 ceph-mon[81728]: 5.5 deep-scrub ok
Jan 31 06:51:04 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3280907012' entity='client.rgw.rgw.compute-0.ibblfd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 06:51:04 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 06:51:04 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 06:51:04 compute-1 ceph-mon[81728]: osdmap e47: 3 total, 3 up, 3 in
Jan 31 06:51:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:04 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 31 06:51:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e5 new map
Jan 31 06:51:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T06:50:15.676838+0000
                                           modified        2026-01-31T06:51:04.968424+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 06:51:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e6 new map
Jan 31 06:51:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T06:50:15.676838+0000
                                           modified        2026-01-31T06:51:04.968424+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]
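Between e5 and e6 the only change is standby_count_wanted going from 0 to 1, presumably adjusted once a second MDS daemon became available; rank 0 is now up:active and cephfs.compute-0.kanoes is the standby. An operator would make the same change with:

    ceph fs set cephfs standby_count_wanted 1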
Jan 31 06:51:05 compute-1 sudo[83790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:05 compute-1 sudo[83790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:05 compute-1 sudo[83790]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:05 compute-1 sudo[83815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:51:05 compute-1 sudo[83815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:05 compute-1 sudo[83815]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:05 compute-1 sudo[83840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:05 compute-1 sudo[83840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:05 compute-1 sudo[83840]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:05 compute-1 sudo[83875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:51:05 compute-1 sudo[83875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:05 compute-1 podman[83939]: 2026-01-31 06:51:05.686836919 +0000 UTC m=+0.024915790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:51:05 compute-1 podman[83939]: 2026-01-31 06:51:05.803537458 +0000 UTC m=+0.141616309 container create a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 06:51:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 06:51:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 06:51:05 compute-1 ceph-mon[81728]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2555873024' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 06:51:05 compute-1 systemd[1]: Started libpod-conmon-a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017.scope.
Jan 31 06:51:05 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:51:05 compute-1 podman[83939]: 2026-01-31 06:51:05.898401858 +0000 UTC m=+0.236480729 container init a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_rubin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 06:51:05 compute-1 podman[83939]: 2026-01-31 06:51:05.90577529 +0000 UTC m=+0.243854131 container start a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 06:51:05 compute-1 competent_rubin[83955]: 167 167
Jan 31 06:51:05 compute-1 systemd[1]: libpod-a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017.scope: Deactivated successfully.
Jan 31 06:51:05 compute-1 conmon[83955]: conmon a553ad98bab767cfecb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017.scope/container/memory.events
Jan 31 06:51:05 compute-1 podman[83939]: 2026-01-31 06:51:05.919279282 +0000 UTC m=+0.257358163 container attach a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_rubin, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 06:51:05 compute-1 podman[83939]: 2026-01-31 06:51:05.91995799 +0000 UTC m=+0.258036841 container died a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:51:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-41233b5b61e5a317bcdf39ca65e9c4403689ce5e7240738842cf599364989aa8-merged.mount: Deactivated successfully.
Jan 31 06:51:05 compute-1 podman[83939]: 2026-01-31 06:51:05.986165944 +0000 UTC m=+0.324244795 container remove a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 31 06:51:05 compute-1 systemd[1]: libpod-conmon-a553ad98bab767cfecb4cab1cb8e16973c6d37f74c1b7bd631a3ad3b1aecf017.scope: Deactivated successfully.
Jan 31 06:51:06 compute-1 systemd[1]: Reloading.
Jan 31 06:51:06 compute-1 ceph-mon[81728]: 4.18 scrub starts
Jan 31 06:51:06 compute-1 ceph-mon[81728]: 4.18 scrub ok
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:06 compute-1 ceph-mon[81728]: mds.? [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] up:active
Jan 31 06:51:06 compute-1 ceph-mon[81728]: mds.? [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] up:boot
Jan 31 06:51:06 compute-1 ceph-mon[81728]: fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 1 up:standby
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.kanoes"}]: dispatch
Jan 31 06:51:06 compute-1 ceph-mon[81728]: fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 1 up:standby
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hhzmle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hhzmle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:06 compute-1 ceph-mon[81728]: Deploying daemon mds.cephfs.compute-1.hhzmle on compute-1
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:06 compute-1 ceph-mon[81728]: osdmap e48: 3 total, 3 up, 3 in
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2282082505' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2555873024' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 06:51:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:51:06 compute-1 systemd-sysv-generator[84004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:51:06 compute-1 systemd-rc-local-generator[84001]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:51:06 compute-1 systemd[1]: Reloading.
Jan 31 06:51:06 compute-1 systemd-rc-local-generator[84036]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:51:06 compute-1 systemd-sysv-generator[84041]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:51:06 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.hhzmle for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a...
Jan 31 06:51:06 compute-1 podman[84101]: 2026-01-31 06:51:06.805807149 +0000 UTC m=+0.047260202 container create af37a72e1bd38bf7ea0e8afed2c0061ef1f2d8de9c6f0dd5e381b4a3f47fc487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mds-cephfs-compute-1-hhzmle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 06:51:06 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 06:51:06 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 06:51:06 compute-1 ceph-mon[81728]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2555873024' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 06:51:06 compute-1 podman[84101]: 2026-01-31 06:51:06.777209704 +0000 UTC m=+0.018662777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:51:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598dab6e09d5d7230f2885c0c9dd97bd30408534287520263b3b5483a3d5857d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:51:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598dab6e09d5d7230f2885c0c9dd97bd30408534287520263b3b5483a3d5857d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:51:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598dab6e09d5d7230f2885c0c9dd97bd30408534287520263b3b5483a3d5857d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:51:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/598dab6e09d5d7230f2885c0c9dd97bd30408534287520263b3b5483a3d5857d/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.hhzmle supports timestamps until 2038 (0x7fffffff)
Jan 31 06:51:06 compute-1 podman[84101]: 2026-01-31 06:51:06.897512367 +0000 UTC m=+0.138965450 container init af37a72e1bd38bf7ea0e8afed2c0061ef1f2d8de9c6f0dd5e381b4a3f47fc487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mds-cephfs-compute-1-hhzmle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 06:51:06 compute-1 podman[84101]: 2026-01-31 06:51:06.901297756 +0000 UTC m=+0.142750809 container start af37a72e1bd38bf7ea0e8afed2c0061ef1f2d8de9c6f0dd5e381b4a3f47fc487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mds-cephfs-compute-1-hhzmle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:51:06 compute-1 bash[84101]: af37a72e1bd38bf7ea0e8afed2c0061ef1f2d8de9c6f0dd5e381b4a3f47fc487
Jan 31 06:51:06 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.hhzmle for ef73c6e0-6d85-55c2-9347-1f544d3e3d3a.
Jan 31 06:51:06 compute-1 ceph-mds[84120]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 06:51:06 compute-1 ceph-mds[84120]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 31 06:51:06 compute-1 ceph-mds[84120]: main not setting numa affinity
Jan 31 06:51:06 compute-1 ceph-mds[84120]: pidfile_write: ignore empty --pid-file
Jan 31 06:51:06 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-mds-cephfs-compute-1-hhzmle[84116]: starting mds.cephfs.compute-1.hhzmle at 
Jan 31 06:51:06 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Updating MDS map to version 6 from mon.2
Jan 31 06:51:06 compute-1 sudo[83875]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:07 compute-1 ceph-mon[81728]: pgmap v140: 197 pgs: 2 unknown, 1 active+clean+laggy, 194 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 06:51:07 compute-1 ceph-mon[81728]: osdmap e49: 3 total, 3 up, 3 in
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2555873024' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2282082505' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:07 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:07 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 06:51:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e7 new map
Jan 31 06:51:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T06:50:15.676838+0000
                                           modified        2026-01-31T06:51:04.968424+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.hhzmle{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 06:51:08 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Updating MDS map to version 7 from mon.2
Jan 31 06:51:08 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Monitors have assigned me to become a standby.
Jan 31 06:51:08 compute-1 ceph-mon[81728]: 6.1c scrub starts
Jan 31 06:51:08 compute-1 ceph-mon[81728]: 6.1c scrub ok
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:08 compute-1 ceph-mon[81728]: Deploying daemon haproxy.rgw.default.compute-0.dsjekd on compute-0
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/891061618' entity='client.rgw.rgw.compute-0.ibblfd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-2.fbgckm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='client.? ' entity='client.rgw.rgw.compute-1.izlkft' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 06:51:08 compute-1 ceph-mon[81728]: osdmap e50: 3 total, 3 up, 3 in
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:51:08 compute-1 ceph-mon[81728]: mds.? [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] up:boot
Jan 31 06:51:08 compute-1 ceph-mon[81728]: fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 06:51:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hhzmle"}]: dispatch
Jan 31 06:51:08 compute-1 radosgw[83730]: LDAP not started since no server URIs were provided in the configuration.
Jan 31 06:51:08 compute-1 ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-rgw-rgw-compute-1-izlkft[83726]: 2026-01-31T06:51:08.742+0000 7f4b17d58940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 31 06:51:08 compute-1 radosgw[83730]: framework: beast
Jan 31 06:51:08 compute-1 radosgw[83730]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 06:51:08 compute-1 radosgw[83730]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 06:51:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:08 compute-1 radosgw[83730]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 06:51:08 compute-1 radosgw[83730]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 06:51:08 compute-1 radosgw[83730]: starting handler: beast
Jan 31 06:51:08 compute-1 radosgw[83730]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 06:51:08 compute-1 radosgw[83730]: mgrc service_daemon_register rgw.24131 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.izlkft,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=5b2cd03f-b7da-4851-9803-ae95ec73332f,zone_name=default,zonegroup_id=496e4318-7ebf-46fe-91cf-296e443f34ee,zonegroup_name=default}
Jan 31 06:51:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 06:51:09 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e8 new map
Jan 31 06:51:09 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T06:50:15.676838+0000
                                           modified        2026-01-31T06:51:09.005628+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.hhzmle{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 06:51:09 compute-1 ceph-mon[81728]: pgmap v143: 197 pgs: 1 unknown, 1 active+clean+laggy, 195 active+clean; 451 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 511 B/s wr, 2 op/s
Jan 31 06:51:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:51:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:51:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 06:51:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:51:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:51:09 compute-1 ceph-mon[81728]: osdmap e51: 3 total, 3 up, 3 in
Jan 31 06:51:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 06:51:09 compute-1 ceph-mon[81728]: mds.? [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] up:active
Jan 31 06:51:09 compute-1 ceph-mon[81728]: mds.? [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] up:standby
Jan 31 06:51:09 compute-1 ceph-mon[81728]: fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 06:51:10 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 06:51:10 compute-1 ceph-mon[81728]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 06:51:10 compute-1 ceph-mon[81728]: Cluster is now healthy
Jan 31 06:51:10 compute-1 ceph-mon[81728]: 5.6 scrub starts
Jan 31 06:51:10 compute-1 ceph-mon[81728]: 5.6 scrub ok
Jan 31 06:51:10 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 06:51:10 compute-1 ceph-mon[81728]: osdmap e52: 3 total, 3 up, 3 in
Jan 31 06:51:11 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 06:51:11 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 53 pg[10.0( v 47'48 (0'0,47'48] local-lis/les=46/47 n=8 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=9.691188812s) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 47'47 mlcod 47'47 active pruub 131.225112915s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:11 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 53 pg[10.0( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=9.691188812s) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 47'47 mlcod 0'0 unknown pruub 131.225112915s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:11 compute-1 ceph-mon[81728]: pgmap v146: 259 pgs: 1 active+clean+laggy, 62 unknown, 196 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 8.0 KiB/s wr, 31 op/s
Jan 31 06:51:11 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:51:11 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 06:51:11 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:11 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:11 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:11 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:11 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e9 new map
Jan 31 06:51:11 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T06:50:15.676838+0000
                                           modified        2026-01-31T06:51:09.005628+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24139}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.wcykmw{0:24139} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2665570797,v1:192.168.122.102:6805/2665570797] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.kanoes{-1:14361} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3481669750,v1:192.168.122.100:6807/3481669750] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.hhzmle{-1:24137} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 06:51:11 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Updating MDS map to version 9 from mon.2
Jan 31 06:51:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 31 06:51:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:11.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 31 06:51:12 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1b( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.11( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.7( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.12( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.10( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1f( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1e( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1d( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1c( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1a( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.6( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.18( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.4( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.19( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.3( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.b( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.5( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.d( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.8( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.9( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.a( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.c( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.e( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1( v 47'48 (0'0,47'48] local-lis/les=46/47 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.f( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.2( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.13( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.14( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.15( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.16( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.17( v 47'48 lc 0'0 (0'0,47'48] local-lis/les=46/47 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1b( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.7( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.12( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.10( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1f( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1e( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.11( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1d( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1a( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1c( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.6( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.18( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.4( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.b( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.5( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.3( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.8( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.d( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.9( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.a( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.c( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.19( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.e( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.0( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 47'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.1( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.f( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.2( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.14( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.15( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.16( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.17( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 54 pg[10.13( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=46/46 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[46,53)/1 crt=47'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:12 compute-1 ceph-mon[81728]: 4.1f deep-scrub starts
Jan 31 06:51:12 compute-1 ceph-mon[81728]: 4.1f deep-scrub ok
Jan 31 06:51:12 compute-1 ceph-mon[81728]: Deploying daemon haproxy.rgw.default.compute-2.wrxlmw on compute-2
Jan 31 06:51:12 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:51:12 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 06:51:12 compute-1 ceph-mon[81728]: osdmap e53: 3 total, 3 up, 3 in
Jan 31 06:51:12 compute-1 ceph-mon[81728]: mds.? [v2:192.168.122.101:6804/1691342288,v1:192.168.122.101:6805/1691342288] up:standby
Jan 31 06:51:12 compute-1 ceph-mon[81728]: fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 06:51:13 compute-1 ceph-mon[81728]: osdmap e54: 3 total, 3 up, 3 in
Jan 31 06:51:13 compute-1 ceph-mon[81728]: pgmap v149: 321 pgs: 1 active+clean+laggy, 124 unknown, 196 active+clean; 455 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 7.5 KiB/s wr, 28 op/s
Jan 31 06:51:13 compute-1 ceph-mon[81728]: 5.a scrub starts
Jan 31 06:51:13 compute-1 ceph-mon[81728]: 5.a scrub ok
Jan 31 06:51:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:13.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:14 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Jan 31 06:51:14 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Jan 31 06:51:14 compute-1 ceph-mon[81728]: 4.15 scrub starts
Jan 31 06:51:14 compute-1 ceph-mon[81728]: 4.15 scrub ok
Jan 31 06:51:14 compute-1 ceph-mon[81728]: pgmap v150: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 250 KiB/s rd, 3.0 KiB/s wr, 447 op/s
Jan 31 06:51:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 06:51:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 06:51:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 06:51:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 06:51:14 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.1b( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.489240646s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.589584351s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.11( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.501080513s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601394653s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.10( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500678062s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601074219s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.12( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500636101s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.600860596s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.12( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500381470s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.600860596s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.1b( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.489098549s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.589584351s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.11( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500885963s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601394653s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.1e( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500762939s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601348877s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.1e( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500701904s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601348877s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.10( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500335693s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601074219s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.18( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500555038s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601531982s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.19( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500761986s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601791382s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.18( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500530243s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601531982s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.19( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500735283s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601791382s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.5( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500529289s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601654053s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.5( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500465393s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601654053s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.3( v 54'51 (0'0,54'51] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500430107s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 54'50 mlcod 54'50 active pruub 138.601654053s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.4( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500385284s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601593018s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.4( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500341415s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601593018s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.3( v 54'51 (0'0,54'51] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500389099s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 54'50 mlcod 0'0 unknown NOTIFY pruub 138.601654053s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.8( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500304222s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601669312s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.8( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500269890s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601669312s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.f( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500207901s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601867676s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.f( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500172615s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601867676s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.1( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500020027s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601852417s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.2( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500019073s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.601928711s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.13( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500654221s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 active pruub 138.602615356s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.13( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.500630379s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.602615356s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.1( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.499991417s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601852417s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.14( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.499653816s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 54'50 mlcod 54'50 active pruub 138.601943970s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.2( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.499943733s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.601928711s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.14( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.499599457s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 54'50 mlcod 0'0 unknown NOTIFY pruub 138.601943970s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.15( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.499620438s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 54'50 mlcod 54'50 active pruub 138.601989746s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[10.15( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=13.499567032s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 54'50 mlcod 0'0 unknown NOTIFY pruub 138.601989746s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.12( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[8.14( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.14( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[8.10( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.1( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[8.17( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[8.8( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.f( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.4( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.5( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[8.4( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[8.1b( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.7( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[8.19( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.1b( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.1a( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[8.18( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.1d( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.1c( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[11.1e( empty local-lis/les=0/0 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 55 pg[8.12( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:15 compute-1 ceph-mon[81728]: 6.1a scrub starts
Jan 31 06:51:15 compute-1 ceph-mon[81728]: 6.1a scrub ok
Jan 31 06:51:15 compute-1 ceph-mon[81728]: 5.c scrub starts
Jan 31 06:51:15 compute-1 ceph-mon[81728]: 5.c scrub ok
Jan 31 06:51:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:51:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:51:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 06:51:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:51:15 compute-1 ceph-mon[81728]: osdmap e55: 3 total, 3 up, 3 in
Jan 31 06:51:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:15 compute-1 ceph-mon[81728]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 06:51:15 compute-1 ceph-mon[81728]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 06:51:15 compute-1 ceph-mon[81728]: Deploying daemon keepalived.rgw.default.compute-0.kqakbv on compute-0
Jan 31 06:51:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.17( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.8( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.14( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.7( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.4( v 43'8 (0'0,43'8] local-lis/les=55/56 n=1 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.1b( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.18( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.5( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.1d( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.12( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.19( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=55/56 n=0 ec=53/48 lis/c=53/53 les/c/f=54/54/0 sis=55) [1] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.10( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:15.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:16 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 31 06:51:16 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 31 06:51:16 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 06:51:16 compute-1 ceph-mon[81728]: 5.14 scrub starts
Jan 31 06:51:16 compute-1 ceph-mon[81728]: 5.14 scrub ok
Jan 31 06:51:16 compute-1 ceph-mon[81728]: osdmap e56: 3 total, 3 up, 3 in
Jan 31 06:51:16 compute-1 ceph-mon[81728]: 3.1c scrub starts
Jan 31 06:51:16 compute-1 ceph-mon[81728]: 3.1c scrub ok
Jan 31 06:51:16 compute-1 ceph-mon[81728]: pgmap v153: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 257 KiB/s rd, 3.1 KiB/s wr, 459 op/s
Jan 31 06:51:16 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 06:51:17 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Jan 31 06:51:17 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Jan 31 06:51:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:17.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:17.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:17 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 06:51:17 compute-1 ceph-mon[81728]: osdmap e57: 3 total, 3 up, 3 in
Jan 31 06:51:17 compute-1 ceph-mon[81728]: 7.1f scrub starts
Jan 31 06:51:17 compute-1 ceph-mon[81728]: 7.1f scrub ok
Jan 31 06:51:17 compute-1 ceph-mon[81728]: 6.19 scrub starts
Jan 31 06:51:17 compute-1 ceph-mon[81728]: 6.19 scrub ok
Jan 31 06:51:18 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:18 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 06:51:18 compute-1 ceph-mon[81728]: 6.1e scrub starts
Jan 31 06:51:18 compute-1 ceph-mon[81728]: 6.1e scrub ok
Jan 31 06:51:18 compute-1 ceph-mon[81728]: pgmap v155: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 220 KiB/s rd, 2.7 KiB/s wr, 393 op/s; 145 B/s, 0 objects/s recovering
Jan 31 06:51:18 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 06:51:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:51:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:19.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:51:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:19.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 06:51:21 compute-1 ceph-mon[81728]: osdmap e58: 3 total, 3 up, 3 in
Jan 31 06:51:21 compute-1 ceph-mon[81728]: 3.11 scrub starts
Jan 31 06:51:21 compute-1 ceph-mon[81728]: 3.11 scrub ok
Jan 31 06:51:21 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 06:51:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:21.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:21.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:22 compute-1 ceph-mon[81728]: 2.12 scrub starts
Jan 31 06:51:22 compute-1 ceph-mon[81728]: 2.12 scrub ok
Jan 31 06:51:22 compute-1 ceph-mon[81728]: pgmap v157: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s; 480 B/s, 1 objects/s recovering
Jan 31 06:51:22 compute-1 ceph-mon[81728]: 5.17 scrub starts
Jan 31 06:51:22 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 06:51:22 compute-1 ceph-mon[81728]: 5.17 scrub ok
Jan 31 06:51:22 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 06:51:22 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:22 compute-1 ceph-mon[81728]: osdmap e59: 3 total, 3 up, 3 in
Jan 31 06:51:22 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:22 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:22 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:22 compute-1 ceph-mon[81728]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 06:51:22 compute-1 ceph-mon[81728]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 06:51:22 compute-1 ceph-mon[81728]: Deploying daemon keepalived.rgw.default.compute-2.rcppiv on compute-2
Jan 31 06:51:22 compute-1 ceph-mon[81728]: pgmap v159: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s; 446 B/s, 1 objects/s recovering
Jan 31 06:51:22 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 06:51:22 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 06:51:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:23 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:23.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:23 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 06:51:23 compute-1 ceph-mon[81728]: osdmap e60: 3 total, 3 up, 3 in
Jan 31 06:51:23 compute-1 ceph-mon[81728]: 4.9 deep-scrub starts
Jan 31 06:51:23 compute-1 ceph-mon[81728]: 4.9 deep-scrub ok
Jan 31 06:51:23 compute-1 sshd-session[84683]: Received disconnect from 45.148.10.141 port 58432:11:  [preauth]
Jan 31 06:51:23 compute-1 sshd-session[84683]: Disconnected from authenticating user root 45.148.10.141 port 58432 [preauth]
Jan 31 06:51:23 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 06:51:24 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 06:51:24 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 31 06:51:24 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 31 06:51:24 compute-1 ceph-mon[81728]: 5.19 scrub starts
Jan 31 06:51:24 compute-1 ceph-mon[81728]: 5.19 scrub ok
Jan 31 06:51:24 compute-1 ceph-mon[81728]: osdmap e61: 3 total, 3 up, 3 in
Jan 31 06:51:24 compute-1 ceph-mon[81728]: osdmap e62: 3 total, 3 up, 3 in
Jan 31 06:51:24 compute-1 ceph-mon[81728]: 5.1b scrub starts
Jan 31 06:51:24 compute-1 ceph-mon[81728]: 5.1b scrub ok
Jan 31 06:51:24 compute-1 ceph-mon[81728]: pgmap v163: 321 pgs: 4 unknown, 8 remapped+peering, 1 active+clean+laggy, 308 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:51:25 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 06:51:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:25.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:26 compute-1 ceph-mon[81728]: osdmap e63: 3 total, 3 up, 3 in
Jan 31 06:51:26 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 06:51:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:27 compute-1 ceph-mon[81728]: pgmap v166: 321 pgs: 4 unknown, 8 remapped+peering, 1 active+clean+laggy, 308 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:51:27 compute-1 ceph-mon[81728]: osdmap e64: 3 total, 3 up, 3 in
Jan 31 06:51:27 compute-1 ceph-mon[81728]: 3.e deep-scrub starts
Jan 31 06:51:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:27 compute-1 ceph-mon[81728]: 3.e deep-scrub ok
Jan 31 06:51:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:51:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:27.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:51:27 compute-1 sudo[84685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:27 compute-1 sudo[84685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:27 compute-1 sudo[84685]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:27 compute-1 sudo[84710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:51:27 compute-1 sudo[84710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:27 compute-1 sudo[84710]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:51:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:27.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:51:27 compute-1 sudo[84735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:27 compute-1 sudo[84735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:27 compute-1 sudo[84735]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:27 compute-1 sudo[84760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:51:27 compute-1 sudo[84760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:27 compute-1 sudo[84760]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:27 compute-1 sudo[84785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:27 compute-1 sudo[84785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:27 compute-1 sudo[84785]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:28 compute-1 sudo[84810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 06:51:28 compute-1 sudo[84810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:28 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 31 06:51:28 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 31 06:51:28 compute-1 podman[84905]: 2026-01-31 06:51:28.426763759 +0000 UTC m=+0.044501446 container exec 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 06:51:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:28 compute-1 ceph-mon[81728]: pgmap v167: 321 pgs: 4 unknown, 8 remapped+peering, 1 active+clean+laggy, 308 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 477 B/s rd, 0 op/s
Jan 31 06:51:28 compute-1 podman[84905]: 2026-01-31 06:51:28.52732507 +0000 UTC m=+0.145062727 container exec_died 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 06:51:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:28 compute-1 sudo[84810]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:29 compute-1 ceph-mon[81728]: 5.1c scrub starts
Jan 31 06:51:29 compute-1 ceph-mon[81728]: 5.1c scrub ok
Jan 31 06:51:29 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:29 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:29 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:29 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:29.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:30 compute-1 ceph-mon[81728]: 5.1d scrub starts
Jan 31 06:51:30 compute-1 ceph-mon[81728]: 5.1d scrub ok
Jan 31 06:51:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:51:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:51:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:51:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:30 compute-1 ceph-mon[81728]: pgmap v168: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 998 B/s wr, 57 op/s; 293 B/s, 10 objects/s recovering
Jan 31 06:51:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 06:51:31 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 06:51:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:31 compute-1 ceph-mon[81728]: 5.1e scrub starts
Jan 31 06:51:31 compute-1 ceph-mon[81728]: 5.1e scrub ok
Jan 31 06:51:31 compute-1 ceph-mon[81728]: 6.a scrub starts
Jan 31 06:51:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:31 compute-1 ceph-mon[81728]: 6.a scrub ok
Jan 31 06:51:31 compute-1 ceph-mon[81728]: Health check failed: 1 slow ops, oldest one blocked for 33 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:51:31 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 06:51:31 compute-1 ceph-mon[81728]: osdmap e65: 3 total, 3 up, 3 in
Jan 31 06:51:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:31.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:32 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 06:51:32 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:32 compute-1 ceph-mon[81728]: pgmap v170: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 870 B/s wr, 50 op/s; 255 B/s, 8 objects/s recovering
Jan 31 06:51:32 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:32 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:33 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 31 06:51:33 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 31 06:51:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:33 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 06:51:33 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:33 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 06:51:33 compute-1 ceph-mon[81728]: osdmap e66: 3 total, 3 up, 3 in
Jan 31 06:51:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:33 compute-1 ceph-mon[81728]: 4.1b scrub starts
Jan 31 06:51:33 compute-1 ceph-mon[81728]: 4.1b scrub ok
Jan 31 06:51:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:33.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:34 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Jan 31 06:51:34 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Jan 31 06:51:34 compute-1 ceph-mon[81728]: osdmap e67: 3 total, 3 up, 3 in
Jan 31 06:51:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:34 compute-1 ceph-mon[81728]: 4.1a deep-scrub starts
Jan 31 06:51:34 compute-1 ceph-mon[81728]: 4.1a deep-scrub ok
Jan 31 06:51:34 compute-1 ceph-mon[81728]: pgmap v173: 321 pgs: 1 active+clean+laggy, 4 unknown, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 5.8 KiB/s rd, 341 B/s wr, 8 op/s; 300 B/s, 10 objects/s recovering
Jan 31 06:51:34 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 06:51:34 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 luod=0'0 crt=54'450 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:34 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 luod=0'0 crt=52'438 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:34 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 luod=0'0 crt=53'453 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:34 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:34 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:34 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:34 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 luod=0'0 crt=54'458 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:34 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 06:51:35 compute-1 ceph-mon[81728]: osdmap e68: 3 total, 3 up, 3 in
Jan 31 06:51:35 compute-1 ceph-mon[81728]: 4.8 scrub starts
Jan 31 06:51:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:35 compute-1 ceph-mon[81728]: 4.8 scrub ok
Jan 31 06:51:35 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:35 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:35 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=68/69 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:35 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=68/69 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:35 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:35.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:35 compute-1 sudo[85031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:35 compute-1 sudo[85031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:35 compute-1 sudo[85031]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:35 compute-1 sudo[85056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:51:35 compute-1 sudo[85056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:35 compute-1 sudo[85056]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:36 compute-1 ceph-mon[81728]: osdmap e69: 3 total, 3 up, 3 in
Jan 31 06:51:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:36 compute-1 ceph-mon[81728]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:36 compute-1 ceph-mon[81728]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 06:51:36 compute-1 ceph-mon[81728]: 6.4 scrub starts
Jan 31 06:51:36 compute-1 ceph-mon[81728]: 6.4 scrub ok
Jan 31 06:51:36 compute-1 ceph-mon[81728]: pgmap v176: 321 pgs: 1 active+clean+laggy, 4 unknown, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gghdjs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 06:51:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:37 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 31 06:51:37 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 31 06:51:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:37.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:51:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:37.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:51:37 compute-1 ceph-mon[81728]: Reconfiguring mgr.compute-0.gghdjs (monmap changed)...
Jan 31 06:51:37 compute-1 ceph-mon[81728]: Reconfiguring daemon mgr.compute-0.gghdjs on compute-0
Jan 31 06:51:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:37 compute-1 ceph-mon[81728]: 4.c scrub starts
Jan 31 06:51:37 compute-1 ceph-mon[81728]: 4.c scrub ok
Jan 31 06:51:37 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:37 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:37 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 06:51:37 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:38 compute-1 sudo[85081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:38 compute-1 sudo[85081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:38 compute-1 sudo[85081]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:38 compute-1 sudo[85106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:51:38 compute-1 sudo[85106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:38 compute-1 sudo[85106]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:38 compute-1 sudo[85131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:38 compute-1 sudo[85131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:38 compute-1 sudo[85131]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:38 compute-1 sudo[85156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:51:38 compute-1 sudo[85156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:39 compute-1 ceph-mon[81728]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 06:51:39 compute-1 ceph-mon[81728]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 06:51:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:39 compute-1 ceph-mon[81728]: Reconfiguring osd.0 (monmap changed)...
Jan 31 06:51:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 06:51:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:39 compute-1 ceph-mon[81728]: Reconfiguring daemon osd.0 on compute-0
Jan 31 06:51:39 compute-1 ceph-mon[81728]: pgmap v177: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 742 B/s wr, 54 op/s; 39 B/s, 3 objects/s recovering
Jan 31 06:51:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 06:51:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 06:51:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:39 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 06:51:39 compute-1 podman[85198]: 2026-01-31 06:51:39.182210135 +0000 UTC m=+0.021072847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:51:39 compute-1 podman[85198]: 2026-01-31 06:51:39.362973681 +0000 UTC m=+0.201836363 container create 2df8820a616552a04ddc452dc1aee3a3003b10aa649beca1e828e72f8d0ce7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 06:51:39 compute-1 systemd[1]: Started libpod-conmon-2df8820a616552a04ddc452dc1aee3a3003b10aa649beca1e828e72f8d0ce7d1.scope.
Jan 31 06:51:39 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:51:39 compute-1 podman[85198]: 2026-01-31 06:51:39.575414628 +0000 UTC m=+0.414277410 container init 2df8820a616552a04ddc452dc1aee3a3003b10aa649beca1e828e72f8d0ce7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 06:51:39 compute-1 podman[85198]: 2026-01-31 06:51:39.582060936 +0000 UTC m=+0.420923648 container start 2df8820a616552a04ddc452dc1aee3a3003b10aa649beca1e828e72f8d0ce7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 06:51:39 compute-1 eager_dhawan[85214]: 167 167
Jan 31 06:51:39 compute-1 systemd[1]: libpod-2df8820a616552a04ddc452dc1aee3a3003b10aa649beca1e828e72f8d0ce7d1.scope: Deactivated successfully.
Jan 31 06:51:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:39.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:39 compute-1 podman[85198]: 2026-01-31 06:51:39.783949989 +0000 UTC m=+0.622812811 container attach 2df8820a616552a04ddc452dc1aee3a3003b10aa649beca1e828e72f8d0ce7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 06:51:39 compute-1 podman[85198]: 2026-01-31 06:51:39.785184602 +0000 UTC m=+0.624047324 container died 2df8820a616552a04ddc452dc1aee3a3003b10aa649beca1e828e72f8d0ce7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 06:51:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:39.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-86b91721dab4f19aeb73be1347222b45de61deb79c09bc4c18259b8a1c36042e-merged.mount: Deactivated successfully.
Jan 31 06:51:40 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 31 06:51:40 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 31 06:51:40 compute-1 ceph-mon[81728]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 06:51:40 compute-1 ceph-mon[81728]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 06:51:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:40 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 38 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:51:40 compute-1 ceph-mon[81728]: 6.6 deep-scrub starts
Jan 31 06:51:40 compute-1 ceph-mon[81728]: 6.6 deep-scrub ok
Jan 31 06:51:40 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 06:51:40 compute-1 ceph-mon[81728]: osdmap e70: 3 total, 3 up, 3 in
Jan 31 06:51:40 compute-1 podman[85198]: 2026-01-31 06:51:40.271140525 +0000 UTC m=+1.110003207 container remove 2df8820a616552a04ddc452dc1aee3a3003b10aa649beca1e828e72f8d0ce7d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dhawan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:51:40 compute-1 systemd[1]: libpod-conmon-2df8820a616552a04ddc452dc1aee3a3003b10aa649beca1e828e72f8d0ce7d1.scope: Deactivated successfully.
Jan 31 06:51:40 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 06:51:40 compute-1 sudo[85156]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:41 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 31 06:51:41 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 31 06:51:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:41.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:41 compute-1 ceph-mon[81728]: 6.e scrub starts
Jan 31 06:51:41 compute-1 ceph-mon[81728]: 6.e scrub ok
Jan 31 06:51:41 compute-1 ceph-mon[81728]: pgmap v179: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 682 B/s wr, 50 op/s; 36 B/s, 3 objects/s recovering
Jan 31 06:51:41 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 06:51:41 compute-1 ceph-mon[81728]: osdmap e71: 3 total, 3 up, 3 in
Jan 31 06:51:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:41.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:42 compute-1 sudo[85234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:42 compute-1 sudo[85234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:42 compute-1 sudo[85234]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:42 compute-1 sudo[85259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:51:42 compute-1 sudo[85259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:42 compute-1 sudo[85259]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:42 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 06:51:42 compute-1 sudo[85284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:42 compute-1 sudo[85284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:42 compute-1 sudo[85284]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:42 compute-1 sudo[85309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:51:42 compute-1 sudo[85309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:42 compute-1 podman[85350]: 2026-01-31 06:51:42.47363686 +0000 UTC m=+0.020360958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:51:42 compute-1 podman[85350]: 2026-01-31 06:51:42.604198277 +0000 UTC m=+0.150922345 container create eb35f43fc629a45eed22d2cc7d032f99d3579e9e2a35944414e7646f1b2586ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 06:51:42 compute-1 systemd[1]: Started libpod-conmon-eb35f43fc629a45eed22d2cc7d032f99d3579e9e2a35944414e7646f1b2586ab.scope.
Jan 31 06:51:42 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:51:42 compute-1 podman[85350]: 2026-01-31 06:51:42.670056226 +0000 UTC m=+0.216780344 container init eb35f43fc629a45eed22d2cc7d032f99d3579e9e2a35944414e7646f1b2586ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 06:51:42 compute-1 podman[85350]: 2026-01-31 06:51:42.675642726 +0000 UTC m=+0.222366814 container start eb35f43fc629a45eed22d2cc7d032f99d3579e9e2a35944414e7646f1b2586ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:51:42 compute-1 podman[85350]: 2026-01-31 06:51:42.678936675 +0000 UTC m=+0.225660803 container attach eb35f43fc629a45eed22d2cc7d032f99d3579e9e2a35944414e7646f1b2586ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 06:51:42 compute-1 boring_bhaskara[85366]: 167 167
Jan 31 06:51:42 compute-1 systemd[1]: libpod-eb35f43fc629a45eed22d2cc7d032f99d3579e9e2a35944414e7646f1b2586ab.scope: Deactivated successfully.
Jan 31 06:51:42 compute-1 podman[85350]: 2026-01-31 06:51:42.680290911 +0000 UTC m=+0.227014989 container died eb35f43fc629a45eed22d2cc7d032f99d3579e9e2a35944414e7646f1b2586ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:51:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-3dd2544d5a1e22c9ca726507ac402e656c9fa6f108c2e38ddac92b8d59a9c129-merged.mount: Deactivated successfully.
Jan 31 06:51:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:42 compute-1 ceph-mon[81728]: 5.4 deep-scrub starts
Jan 31 06:51:42 compute-1 ceph-mon[81728]: 4.e scrub starts
Jan 31 06:51:42 compute-1 ceph-mon[81728]: 4.e scrub ok
Jan 31 06:51:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:42 compute-1 ceph-mon[81728]: 5.4 deep-scrub ok
Jan 31 06:51:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 06:51:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:42 compute-1 ceph-mon[81728]: osdmap e72: 3 total, 3 up, 3 in
Jan 31 06:51:42 compute-1 ceph-mon[81728]: Reconfiguring osd.1 (monmap changed)...
Jan 31 06:51:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 06:51:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:42 compute-1 ceph-mon[81728]: Reconfiguring daemon osd.1 on compute-1
Jan 31 06:51:42 compute-1 ceph-mon[81728]: pgmap v182: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 682 B/s wr, 50 op/s; 36 B/s, 3 objects/s recovering
Jan 31 06:51:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 06:51:42 compute-1 podman[85350]: 2026-01-31 06:51:42.724570281 +0000 UTC m=+0.271294349 container remove eb35f43fc629a45eed22d2cc7d032f99d3579e9e2a35944414e7646f1b2586ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 06:51:42 compute-1 systemd[1]: libpod-conmon-eb35f43fc629a45eed22d2cc7d032f99d3579e9e2a35944414e7646f1b2586ab.scope: Deactivated successfully.
Jan 31 06:51:42 compute-1 sudo[85309]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:42 compute-1 sudo[85390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:42 compute-1 sudo[85390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:42 compute-1 sudo[85390]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:42 compute-1 sudo[85415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:51:42 compute-1 sudo[85415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:42 compute-1 sudo[85415]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:42 compute-1 sudo[85440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:42 compute-1 sudo[85440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:42 compute-1 sudo[85440]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:43 compute-1 sudo[85465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
Jan 31 06:51:43 compute-1 sudo[85465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 06:51:43 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:43 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:43 compute-1 podman[85505]: 2026-01-31 06:51:43.274339259 +0000 UTC m=+0.036232154 container create 0f5f5b8b5ba4fbe9e58712ebef90a9e5bbca9ef5b493600e1bbf5f73a060c4f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 06:51:43 compute-1 systemd[1]: Started libpod-conmon-0f5f5b8b5ba4fbe9e58712ebef90a9e5bbca9ef5b493600e1bbf5f73a060c4f8.scope.
Jan 31 06:51:43 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:51:43 compute-1 podman[85505]: 2026-01-31 06:51:43.345610143 +0000 UTC m=+0.107502778 container init 0f5f5b8b5ba4fbe9e58712ebef90a9e5bbca9ef5b493600e1bbf5f73a060c4f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:51:43 compute-1 podman[85505]: 2026-01-31 06:51:43.351639095 +0000 UTC m=+0.113531730 container start 0f5f5b8b5ba4fbe9e58712ebef90a9e5bbca9ef5b493600e1bbf5f73a060c4f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:51:43 compute-1 podman[85505]: 2026-01-31 06:51:43.257385673 +0000 UTC m=+0.019278328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 06:51:43 compute-1 heuristic_mclaren[85521]: 167 167
Jan 31 06:51:43 compute-1 podman[85505]: 2026-01-31 06:51:43.355503749 +0000 UTC m=+0.117396414 container attach 0f5f5b8b5ba4fbe9e58712ebef90a9e5bbca9ef5b493600e1bbf5f73a060c4f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 06:51:43 compute-1 systemd[1]: libpod-0f5f5b8b5ba4fbe9e58712ebef90a9e5bbca9ef5b493600e1bbf5f73a060c4f8.scope: Deactivated successfully.
Jan 31 06:51:43 compute-1 podman[85505]: 2026-01-31 06:51:43.35664535 +0000 UTC m=+0.118537985 container died 0f5f5b8b5ba4fbe9e58712ebef90a9e5bbca9ef5b493600e1bbf5f73a060c4f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 06:51:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-2d811fb77bcbdc0e628de9da495e458ac1ae852ea34a71744998b79ed6ecdaf8-merged.mount: Deactivated successfully.
Jan 31 06:51:43 compute-1 podman[85505]: 2026-01-31 06:51:43.40543548 +0000 UTC m=+0.167328115 container remove 0f5f5b8b5ba4fbe9e58712ebef90a9e5bbca9ef5b493600e1bbf5f73a060c4f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclaren, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:51:43 compute-1 systemd[1]: libpod-conmon-0f5f5b8b5ba4fbe9e58712ebef90a9e5bbca9ef5b493600e1bbf5f73a060c4f8.scope: Deactivated successfully.
Jan 31 06:51:43 compute-1 sudo[85465]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:43.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:43 compute-1 ceph-mon[81728]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:43 compute-1 ceph-mon[81728]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 06:51:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 06:51:43 compute-1 ceph-mon[81728]: osdmap e73: 3 total, 3 up, 3 in
Jan 31 06:51:43 compute-1 ceph-mon[81728]: 6.9 deep-scrub starts
Jan 31 06:51:43 compute-1 ceph-mon[81728]: 6.9 deep-scrub ok
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 06:51:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:44 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 06:51:44 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:44 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:44 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:44 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:44 compute-1 sudo[85539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:44 compute-1 sudo[85539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:44 compute-1 sudo[85539]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:44 compute-1 sudo[85564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:51:44 compute-1 sudo[85564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:44 compute-1 sudo[85564]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:44 compute-1 sudo[85589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:44 compute-1 sudo[85589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:44 compute-1 sudo[85589]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:44 compute-1 sudo[85614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 06:51:44 compute-1 sudo[85614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:45 compute-1 ceph-mon[81728]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 06:51:45 compute-1 ceph-mon[81728]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 06:51:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:45 compute-1 ceph-mon[81728]: osdmap e74: 3 total, 3 up, 3 in
Jan 31 06:51:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:45 compute-1 ceph-mon[81728]: Reconfiguring mgr.compute-2.iujpur (monmap changed)...
Jan 31 06:51:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.iujpur", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 06:51:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 06:51:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:45 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 43 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:51:45 compute-1 ceph-mon[81728]: Reconfiguring daemon mgr.compute-2.iujpur on compute-2
Jan 31 06:51:45 compute-1 ceph-mon[81728]: pgmap v185: 321 pgs: 2 remapped+peering, 2 peering, 1 active+clean+laggy, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 31 06:51:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:45 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 06:51:45 compute-1 podman[85711]: 2026-01-31 06:51:45.312032586 +0000 UTC m=+0.062760147 container exec 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 06:51:45 compute-1 podman[85711]: 2026-01-31 06:51:45.42124378 +0000 UTC m=+0.171971251 container exec_died 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:51:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:45.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:45 compute-1 sudo[85614]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:45.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:45 compute-1 sudo[85834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:45 compute-1 sudo[85834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:45 compute-1 sudo[85834]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:45 compute-1 sudo[85859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:51:45 compute-1 sudo[85859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:45 compute-1 sudo[85859]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:46 compute-1 sudo[85884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:46 compute-1 sudo[85884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:46 compute-1 sudo[85884]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:46 compute-1 sudo[85909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:51:46 compute-1 sudo[85909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:46 compute-1 ceph-mon[81728]: 7.5 deep-scrub starts
Jan 31 06:51:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:46 compute-1 ceph-mon[81728]: 7.5 deep-scrub ok
Jan 31 06:51:46 compute-1 ceph-mon[81728]: osdmap e75: 3 total, 3 up, 3 in
Jan 31 06:51:46 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:46 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:46 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 06:51:46 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 luod=0'0 crt=54'466 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:46 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:46 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 luod=0'0 crt=53'445 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:46 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:46 compute-1 sudo[85909]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:47 compute-1 ceph-mon[81728]: 6.c scrub starts
Jan 31 06:51:47 compute-1 ceph-mon[81728]: 6.c scrub ok
Jan 31 06:51:47 compute-1 ceph-mon[81728]: pgmap v187: 321 pgs: 2 remapped+peering, 2 peering, 1 active+clean+laggy, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 52 B/s, 2 objects/s recovering
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:47 compute-1 ceph-mon[81728]: osdmap e76: 3 total, 3 up, 3 in
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:51:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:51:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:47 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 06:51:47 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:47 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=76/77 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:47.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:47.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:48 compute-1 ceph-mon[81728]: 6.f deep-scrub starts
Jan 31 06:51:48 compute-1 ceph-mon[81728]: 6.f deep-scrub ok
Jan 31 06:51:48 compute-1 ceph-mon[81728]: osdmap e77: 3 total, 3 up, 3 in
Jan 31 06:51:48 compute-1 ceph-mon[81728]: 4.1 deep-scrub starts
Jan 31 06:51:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:48 compute-1 ceph-mon[81728]: 4.1 deep-scrub ok
Jan 31 06:51:48 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:49 compute-1 ceph-mon[81728]: 6.10 scrub starts
Jan 31 06:51:49 compute-1 ceph-mon[81728]: 6.10 scrub ok
Jan 31 06:51:49 compute-1 ceph-mon[81728]: pgmap v190: 321 pgs: 2 remapped+peering, 2 peering, 1 active+clean+laggy, 316 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 52 B/s, 3 objects/s recovering
Jan 31 06:51:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:49 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 48 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:51:49 compute-1 sshd-session[85965]: Accepted publickey for zuul from 192.168.122.30 port 43734 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:51:49 compute-1 systemd-logind[788]: New session 33 of user zuul.
Jan 31 06:51:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:49.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:49 compute-1 systemd[1]: Started Session 33 of User zuul.
Jan 31 06:51:49 compute-1 sshd-session[85965]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:51:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:49.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:50 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 31 06:51:50 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 31 06:51:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 06:51:50 compute-1 ceph-mon[81728]: 6.11 scrub starts
Jan 31 06:51:50 compute-1 ceph-mon[81728]: 6.11 scrub ok
Jan 31 06:51:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:50 compute-1 ceph-mon[81728]: pgmap v191: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 91 B/s, 4 objects/s recovering
Jan 31 06:51:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 06:51:50 compute-1 python3.9[86118]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:51:51 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 31 06:51:51 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 31 06:51:51 compute-1 ceph-mon[81728]: 6.d scrub starts
Jan 31 06:51:51 compute-1 ceph-mon[81728]: 6.d scrub ok
Jan 31 06:51:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 06:51:51 compute-1 ceph-mon[81728]: osdmap e78: 3 total, 3 up, 3 in
Jan 31 06:51:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:51.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:51 compute-1 sudo[86330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kitevlashqoqanwpfnhslzdspgigegtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842311.4524853-57-171605405000750/AnsiballZ_command.py'
Jan 31 06:51:51 compute-1 sudo[86330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:51:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:51.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:51 compute-1 python3.9[86332]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:51:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 31 06:51:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 31 06:51:52 compute-1 sudo[86343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:51:52 compute-1 sudo[86343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:52 compute-1 sudo[86343]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:52 compute-1 sudo[86368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:51:52 compute-1 sudo[86368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:51:52 compute-1 sudo[86368]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:53 compute-1 ceph-mon[81728]: 5.1 scrub starts
Jan 31 06:51:53 compute-1 ceph-mon[81728]: 5.1 scrub ok
Jan 31 06:51:53 compute-1 ceph-mon[81728]: 6.16 scrub starts
Jan 31 06:51:53 compute-1 ceph-mon[81728]: 6.16 scrub ok
Jan 31 06:51:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:53 compute-1 ceph-mon[81728]: pgmap v193: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 91 B/s, 4 objects/s recovering
Jan 31 06:51:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 06:51:53 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 31 06:51:53 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 31 06:51:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:51:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:53.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:51:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:53.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 06:51:54 compute-1 ceph-mon[81728]: 5.f scrub starts
Jan 31 06:51:54 compute-1 ceph-mon[81728]: 5.f scrub ok
Jan 31 06:51:54 compute-1 ceph-mon[81728]: 6.18 scrub starts
Jan 31 06:51:54 compute-1 ceph-mon[81728]: 6.18 scrub ok
Jan 31 06:51:54 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:51:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:54 compute-1 ceph-mon[81728]: 6.2 scrub starts
Jan 31 06:51:54 compute-1 ceph-mon[81728]: 6.2 scrub ok
Jan 31 06:51:54 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 31 06:51:54 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 31 06:51:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:55 compute-1 ceph-mon[81728]: 5.e scrub starts
Jan 31 06:51:55 compute-1 ceph-mon[81728]: 5.e scrub ok
Jan 31 06:51:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 06:51:55 compute-1 ceph-mon[81728]: osdmap e79: 3 total, 3 up, 3 in
Jan 31 06:51:55 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 53 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:51:55 compute-1 ceph-mon[81728]: 6.3 scrub starts
Jan 31 06:51:55 compute-1 ceph-mon[81728]: 6.3 scrub ok
Jan 31 06:51:55 compute-1 ceph-mon[81728]: pgmap v195: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 78 B/s, 3 objects/s recovering
Jan 31 06:51:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 06:51:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 06:51:55 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:55 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:55 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 31 06:51:55 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 31 06:51:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:51:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:55.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:51:55 compute-1 sshd-session[86399]: Invalid user solv from 2.57.122.238 port 55778
Jan 31 06:51:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:55.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:55 compute-1 sshd-session[86399]: Connection closed by invalid user solv 2.57.122.238 port 55778 [preauth]
Jan 31 06:51:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:56 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 06:51:56 compute-1 ceph-mon[81728]: osdmap e80: 3 total, 3 up, 3 in
Jan 31 06:51:56 compute-1 ceph-mon[81728]: 6.5 scrub starts
Jan 31 06:51:56 compute-1 ceph-mon[81728]: 6.5 scrub ok
Jan 31 06:51:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 06:51:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:56 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:51:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:57 compute-1 ceph-mon[81728]: osdmap e81: 3 total, 3 up, 3 in
Jan 31 06:51:57 compute-1 ceph-mon[81728]: pgmap v198: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:51:57 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 06:51:57 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 06:51:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:57.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:57.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:58 compute-1 ceph-mon[81728]: 3.1a scrub starts
Jan 31 06:51:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:58 compute-1 ceph-mon[81728]: 3.1a scrub ok
Jan 31 06:51:58 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 06:51:58 compute-1 ceph-mon[81728]: osdmap e82: 3 total, 3 up, 3 in
Jan 31 06:51:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 06:51:58 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 luod=0'0 crt=54'449 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:58 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:58 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 luod=0'0 crt=54'465 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:51:58 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:51:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:51:59 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 31 06:51:59 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 31 06:51:59 compute-1 sudo[86330]: pam_unix(sudo:session): session closed for user root
Jan 31 06:51:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:51:59 compute-1 ceph-mon[81728]: osdmap e83: 3 total, 3 up, 3 in
Jan 31 06:51:59 compute-1 ceph-mon[81728]: pgmap v201: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:51:59 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 06:51:59 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 58 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:51:59 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 06:51:59 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:59 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=83/84 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:51:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:51:59.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:51:59 compute-1 sshd-session[85968]: Connection closed by 192.168.122.30 port 43734
Jan 31 06:51:59 compute-1 sshd-session[85965]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:51:59 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Jan 31 06:51:59 compute-1 systemd[1]: session-33.scope: Consumed 8.232s CPU time.
Jan 31 06:51:59 compute-1 systemd-logind[788]: Session 33 logged out. Waiting for processes to exit.
Jan 31 06:51:59 compute-1 systemd-logind[788]: Removed session 33.
Jan 31 06:51:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:51:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:51:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:51:59.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:00 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:00 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:00 compute-1 ceph-mon[81728]: 5.2 scrub starts
Jan 31 06:52:00 compute-1 ceph-mon[81728]: 5.2 scrub ok
Jan 31 06:52:00 compute-1 ceph-mon[81728]: 6.1d scrub starts
Jan 31 06:52:00 compute-1 ceph-mon[81728]: 6.1d scrub ok
Jan 31 06:52:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 06:52:00 compute-1 ceph-mon[81728]: osdmap e84: 3 total, 3 up, 3 in
Jan 31 06:52:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 06:52:00 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:00 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:00 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:00 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:00 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:01 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 31 06:52:01 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 31 06:52:01 compute-1 ceph-mon[81728]: 7.13 deep-scrub starts
Jan 31 06:52:01 compute-1 ceph-mon[81728]: 7.13 deep-scrub ok
Jan 31 06:52:01 compute-1 ceph-mon[81728]: pgmap v203: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:01 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 31 06:52:01 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 06:52:01 compute-1 ceph-mon[81728]: osdmap e85: 3 total, 3 up, 3 in
Jan 31 06:52:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:01 compute-1 ceph-mon[81728]: 2.1c scrub starts
Jan 31 06:52:01 compute-1 ceph-mon[81728]: 2.1c scrub ok
Jan 31 06:52:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 06:52:01 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:01 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:01.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:52:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:01.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:52:02 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 06:52:02 compute-1 ceph-mon[81728]: 5.7 scrub starts
Jan 31 06:52:02 compute-1 ceph-mon[81728]: 5.7 scrub ok
Jan 31 06:52:02 compute-1 ceph-mon[81728]: osdmap e86: 3 total, 3 up, 3 in
Jan 31 06:52:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:02 compute-1 ceph-mon[81728]: pgmap v206: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:02 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 31 06:52:02 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:02 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 luod=0'0 crt=54'454 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:02 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:02 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 luod=0'0 crt=54'463 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:02 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 06:52:03 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:03 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:03 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 luod=0'0 crt=52'436 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:03 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:03 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:03 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=87/88 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:03 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 06:52:03 compute-1 ceph-mon[81728]: osdmap e87: 3 total, 3 up, 3 in
Jan 31 06:52:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:03.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 06:52:04 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:04 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=88/89 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:04 compute-1 ceph-mon[81728]: 2.19 scrub starts
Jan 31 06:52:04 compute-1 ceph-mon[81728]: 2.19 scrub ok
Jan 31 06:52:04 compute-1 ceph-mon[81728]: osdmap e88: 3 total, 3 up, 3 in
Jan 31 06:52:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:04 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 63 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:04 compute-1 ceph-mon[81728]: 2.e scrub starts
Jan 31 06:52:04 compute-1 ceph-mon[81728]: 2.e scrub ok
Jan 31 06:52:04 compute-1 ceph-mon[81728]: pgmap v209: 321 pgs: 1 active+remapped, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 06:52:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 31 06:52:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 06:52:04 compute-1 ceph-mon[81728]: osdmap e89: 3 total, 3 up, 3 in
Jan 31 06:52:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:05.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 06:52:05 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 luod=0'0 crt=54'454 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:05 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:05 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:05 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:05.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:05 compute-1 ceph-mon[81728]: 3.1d scrub starts
Jan 31 06:52:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:05 compute-1 ceph-mon[81728]: 3.1d scrub ok
Jan 31 06:52:05 compute-1 ceph-mon[81728]: osdmap e90: 3 total, 3 up, 3 in
Jan 31 06:52:06 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 31 06:52:06 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 31 06:52:06 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 06:52:06 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=90/91 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:06 compute-1 ceph-mon[81728]: 4.5 scrub starts
Jan 31 06:52:06 compute-1 ceph-mon[81728]: 4.5 scrub ok
Jan 31 06:52:06 compute-1 ceph-mon[81728]: pgmap v212: 321 pgs: 1 active+remapped, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 06:52:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 31 06:52:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 06:52:06 compute-1 ceph-mon[81728]: osdmap e91: 3 total, 3 up, 3 in
Jan 31 06:52:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:07.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:07 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 06:52:07 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 luod=0'0 crt=53'447 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:07 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:52:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:07.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:52:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:07 compute-1 ceph-mon[81728]: 2.1d scrub starts
Jan 31 06:52:07 compute-1 ceph-mon[81728]: 2.1d scrub ok
Jan 31 06:52:07 compute-1 ceph-mon[81728]: osdmap e92: 3 total, 3 up, 3 in
Jan 31 06:52:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 06:52:08 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=92/93 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:08 compute-1 ceph-mon[81728]: pgmap v215: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 4 objects/s recovering
Jan 31 06:52:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 31 06:52:08 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 06:52:08 compute-1 ceph-mon[81728]: osdmap e93: 3 total, 3 up, 3 in
Jan 31 06:52:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:09.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:09.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:10 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 68 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:11 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 06:52:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:11.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:11 compute-1 ceph-mon[81728]: 5.1a scrub starts
Jan 31 06:52:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:11 compute-1 ceph-mon[81728]: 5.1a scrub ok
Jan 31 06:52:11 compute-1 ceph-mon[81728]: 7.b scrub starts
Jan 31 06:52:11 compute-1 ceph-mon[81728]: 7.b scrub ok
Jan 31 06:52:11 compute-1 ceph-mon[81728]: pgmap v217: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 73 B/s, 3 objects/s recovering
Jan 31 06:52:11 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 31 06:52:11 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:52:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:11.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:52:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:12 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 06:52:12 compute-1 ceph-mon[81728]: osdmap e94: 3 total, 3 up, 3 in
Jan 31 06:52:12 compute-1 ceph-mon[81728]: 7.14 scrub starts
Jan 31 06:52:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:12 compute-1 ceph-mon[81728]: 7.14 scrub ok
Jan 31 06:52:12 compute-1 ceph-mon[81728]: pgmap v219: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 60 B/s, 3 objects/s recovering
Jan 31 06:52:12 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 31 06:52:12 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 06:52:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95 pruub=11.106972694s) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 active pruub 194.215652466s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:12 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95 pruub=11.106878281s) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 194.215652466s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:13.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:13 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 06:52:13 compute-1 ceph-mon[81728]: osdmap e95: 3 total, 3 up, 3 in
Jan 31 06:52:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 06:52:13 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:13 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:13.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:14 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 06:52:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 luod=0'0 crt=54'444 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:14 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:14 compute-1 ceph-mon[81728]: osdmap e96: 3 total, 3 up, 3 in
Jan 31 06:52:14 compute-1 ceph-mon[81728]: 5.b deep-scrub starts
Jan 31 06:52:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:14 compute-1 ceph-mon[81728]: 5.b deep-scrub ok
Jan 31 06:52:14 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 73 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:14 compute-1 ceph-mon[81728]: osdmap e97: 3 total, 3 up, 3 in
Jan 31 06:52:14 compute-1 ceph-mon[81728]: 7.8 scrub starts
Jan 31 06:52:14 compute-1 ceph-mon[81728]: 7.8 scrub ok
Jan 31 06:52:14 compute-1 ceph-mon[81728]: pgmap v223: 321 pgs: 1 active+clean+laggy, 2 unknown, 318 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:14 compute-1 sshd-session[86441]: Accepted publickey for zuul from 192.168.122.30 port 33086 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:52:15 compute-1 systemd-logind[788]: New session 34 of user zuul.
Jan 31 06:52:15 compute-1 systemd[1]: Started Session 34 of User zuul.
Jan 31 06:52:15 compute-1 sshd-session[86441]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:52:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 06:52:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98 pruub=15.002883911s) [2] async=[2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 52'438 active pruub 200.538467407s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98 pruub=15.002782822s) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 200.538467407s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:15 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=97/98 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:15 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Jan 31 06:52:15 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Jan 31 06:52:15 compute-1 python3.9[86594]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 06:52:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:15.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:52:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:15.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:52:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:16 compute-1 ceph-mon[81728]: 2.d scrub starts
Jan 31 06:52:16 compute-1 ceph-mon[81728]: 2.d scrub ok
Jan 31 06:52:16 compute-1 ceph-mon[81728]: osdmap e98: 3 total, 3 up, 3 in
Jan 31 06:52:16 compute-1 ceph-mon[81728]: 4.a deep-scrub starts
Jan 31 06:52:16 compute-1 ceph-mon[81728]: 4.a deep-scrub ok
Jan 31 06:52:16 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 06:52:16 compute-1 python3.9[86768]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:52:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:17 compute-1 ceph-mon[81728]: 3.0 scrub starts
Jan 31 06:52:17 compute-1 ceph-mon[81728]: 3.0 scrub ok
Jan 31 06:52:17 compute-1 ceph-mon[81728]: 7.9 scrub starts
Jan 31 06:52:17 compute-1 ceph-mon[81728]: osdmap e99: 3 total, 3 up, 3 in
Jan 31 06:52:17 compute-1 ceph-mon[81728]: 7.9 scrub ok
Jan 31 06:52:17 compute-1 ceph-mon[81728]: pgmap v226: 321 pgs: 1 active+clean+laggy, 2 unknown, 318 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:17 compute-1 sudo[86922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqtoewghlymytfvgichkttahbqehqpqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842337.1914942-94-131012706260666/AnsiballZ_command.py'
Jan 31 06:52:17 compute-1 sudo[86922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:52:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:17.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:17 compute-1 python3.9[86924]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:52:17 compute-1 sudo[86922]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:17.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:18 compute-1 sudo[87075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imcsxpczgnthvbquzaagdotxlgaredpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842338.1775677-130-262602459273129/AnsiballZ_stat.py'
Jan 31 06:52:18 compute-1 sudo[87075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:52:18 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:18 compute-1 python3.9[87077]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:52:18 compute-1 sudo[87075]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:19 compute-1 ceph-mon[81728]: pgmap v227: 321 pgs: 1 active+clean+laggy, 1 unknown, 319 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 225 B/s wr, 14 op/s; 48 B/s, 1 objects/s recovering
Jan 31 06:52:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 78 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:19 compute-1 sudo[87229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkyjbkgshhpxphposbjixrbvdxajesi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842339.0985072-163-134345366409596/AnsiballZ_file.py'
Jan 31 06:52:19 compute-1 sudo[87229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:52:19 compute-1 python3.9[87231]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:52:19 compute-1 sudo[87229]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:19.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:19.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:20 compute-1 sudo[87381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzsklaxtjeuqrliovatdbqtksrqmkbsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842339.9063573-190-116109754369080/AnsiballZ_file.py'
Jan 31 06:52:20 compute-1 sudo[87381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:52:20 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 31 06:52:20 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 31 06:52:20 compute-1 python3.9[87383]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:52:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:20 compute-1 sudo[87381]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:21 compute-1 python3.9[87533]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:52:21 compute-1 network[87550]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:52:21 compute-1 network[87551]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:52:21 compute-1 network[87552]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:52:21 compute-1 ceph-mon[81728]: 6.7 scrub starts
Jan 31 06:52:21 compute-1 ceph-mon[81728]: 6.7 scrub ok
Jan 31 06:52:21 compute-1 ceph-mon[81728]: pgmap v228: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 331 B/s wr, 18 op/s; 71 B/s, 2 objects/s recovering
Jan 31 06:52:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 31 06:52:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:21 compute-1 ceph-mon[81728]: 2.c deep-scrub starts
Jan 31 06:52:21 compute-1 ceph-mon[81728]: 2.c deep-scrub ok
Jan 31 06:52:21 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 06:52:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:21.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:52:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:21.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:52:22 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 31 06:52:22 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 31 06:52:22 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 06:52:22 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 06:52:22 compute-1 ceph-mon[81728]: osdmap e100: 3 total, 3 up, 3 in
Jan 31 06:52:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:22 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 31 06:52:23 compute-1 ceph-mon[81728]: 7.e scrub starts
Jan 31 06:52:23 compute-1 ceph-mon[81728]: 7.e scrub ok
Jan 31 06:52:23 compute-1 ceph-mon[81728]: pgmap v230: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 285 B/s wr, 16 op/s; 61 B/s, 2 objects/s recovering
Jan 31 06:52:23 compute-1 ceph-mon[81728]: 4.d scrub starts
Jan 31 06:52:23 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 06:52:23 compute-1 ceph-mon[81728]: 4.d scrub ok
Jan 31 06:52:23 compute-1 ceph-mon[81728]: osdmap e101: 3 total, 3 up, 3 in
Jan 31 06:52:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:23.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:23 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:23.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:24 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 06:52:24 compute-1 python3.9[87812]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:52:24 compute-1 ceph-mon[81728]: 7.4 deep-scrub starts
Jan 31 06:52:24 compute-1 ceph-mon[81728]: 7.4 deep-scrub ok
Jan 31 06:52:24 compute-1 ceph-mon[81728]: 2.a scrub starts
Jan 31 06:52:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:24 compute-1 ceph-mon[81728]: 2.a scrub ok
Jan 31 06:52:24 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 83 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:24 compute-1 ceph-mon[81728]: pgmap v232: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 255 B/s wr, 13 op/s; 54 B/s, 1 objects/s recovering
Jan 31 06:52:24 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 31 06:52:25 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 31 06:52:25 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 31 06:52:25 compute-1 python3.9[87962]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:52:25 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 06:52:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:25.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:25.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:26 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 06:52:26 compute-1 ceph-mon[81728]: osdmap e102: 3 total, 3 up, 3 in
Jan 31 06:52:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:26 compute-1 ceph-mon[81728]: 6.8 scrub starts
Jan 31 06:52:26 compute-1 ceph-mon[81728]: 6.8 scrub ok
Jan 31 06:52:26 compute-1 ceph-mon[81728]: osdmap e103: 3 total, 3 up, 3 in
Jan 31 06:52:26 compute-1 python3.9[88116]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:52:27 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 06:52:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104 pruub=15.973120689s) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 active pruub 213.712402344s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:27 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104 pruub=15.972820282s) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 213.712402344s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:27 compute-1 ceph-mon[81728]: pgmap v235: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:27 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 31 06:52:27 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 06:52:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:27 compute-1 sudo[88272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzzgqkrvmvnfkbfcclcxqigriubvngzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842347.4059398-334-69967520814883/AnsiballZ_setup.py'
Jan 31 06:52:27 compute-1 sudo[88272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:52:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:27.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:28 compute-1 python3.9[88274]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:52:28 compute-1 sudo[88272]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 06:52:28 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:28 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:28 compute-1 sudo[88356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rherikufznegdldivarxsijchbbmqspj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842347.4059398-334-69967520814883/AnsiballZ_dnf.py'
Jan 31 06:52:28 compute-1 sudo[88356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:52:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:28 compute-1 ceph-mon[81728]: 4.2 deep-scrub starts
Jan 31 06:52:28 compute-1 ceph-mon[81728]: 4.2 deep-scrub ok
Jan 31 06:52:28 compute-1 ceph-mon[81728]: osdmap e104: 3 total, 3 up, 3 in
Jan 31 06:52:28 compute-1 ceph-mon[81728]: 7.2 scrub starts
Jan 31 06:52:28 compute-1 ceph-mon[81728]: 7.2 scrub ok
Jan 31 06:52:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:28 compute-1 ceph-mon[81728]: 4.19 scrub starts
Jan 31 06:52:28 compute-1 ceph-mon[81728]: 4.19 scrub ok
Jan 31 06:52:28 compute-1 ceph-mon[81728]: pgmap v237: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 31 06:52:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 06:52:28 compute-1 ceph-mon[81728]: osdmap e105: 3 total, 3 up, 3 in
Jan 31 06:52:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:28 compute-1 python3.9[88358]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:52:29 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 06:52:29 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:52:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:29.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:52:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:29.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:30 compute-1 ceph-mon[81728]: 2.9 scrub starts
Jan 31 06:52:30 compute-1 ceph-mon[81728]: 2.9 scrub ok
Jan 31 06:52:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:30 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 88 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:30 compute-1 ceph-mon[81728]: osdmap e106: 3 total, 3 up, 3 in
Jan 31 06:52:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 06:52:30 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107 pruub=14.542356491s) [0] async=[0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 53'445 active pruub 215.535385132s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:30 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107 pruub=14.541945457s) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 215.535385132s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:30 compute-1 ceph-mon[81728]: 7.f scrub starts
Jan 31 06:52:30 compute-1 ceph-mon[81728]: 7.f scrub ok
Jan 31 06:52:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:30 compute-1 ceph-mon[81728]: pgmap v240: 321 pgs: 1 active+clean+laggy, 1 unknown, 319 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:30 compute-1 ceph-mon[81728]: osdmap e107: 3 total, 3 up, 3 in
Jan 31 06:52:31 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 06:52:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:31.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:31.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:32 compute-1 ceph-mon[81728]: osdmap e108: 3 total, 3 up, 3 in
Jan 31 06:52:32 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 06:52:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:33 compute-1 ceph-mon[81728]: 2.13 deep-scrub starts
Jan 31 06:52:33 compute-1 ceph-mon[81728]: 2.13 deep-scrub ok
Jan 31 06:52:33 compute-1 ceph-mon[81728]: 7.18 deep-scrub starts
Jan 31 06:52:33 compute-1 ceph-mon[81728]: pgmap v243: 321 pgs: 1 active+clean+laggy, 1 unknown, 319 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:33 compute-1 ceph-mon[81728]: 7.18 deep-scrub ok
Jan 31 06:52:33 compute-1 ceph-mon[81728]: osdmap e109: 3 total, 3 up, 3 in
Jan 31 06:52:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:33.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:33 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:52:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:33.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:52:34 compute-1 ceph-mon[81728]: 6.0 scrub starts
Jan 31 06:52:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:34 compute-1 ceph-mon[81728]: 6.0 scrub ok
Jan 31 06:52:34 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 31 06:52:34 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 31 06:52:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 06:52:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:35 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 93 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:35 compute-1 ceph-mon[81728]: 7.1b scrub starts
Jan 31 06:52:35 compute-1 ceph-mon[81728]: 7.1b scrub ok
Jan 31 06:52:35 compute-1 ceph-mon[81728]: pgmap v245: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 198 B/s wr, 13 op/s; 85 B/s, 3 objects/s recovering
Jan 31 06:52:35 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 31 06:52:35 compute-1 ceph-mon[81728]: 3.d scrub starts
Jan 31 06:52:35 compute-1 ceph-mon[81728]: 3.d scrub ok
Jan 31 06:52:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:35.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:35.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 06:52:36 compute-1 ceph-mon[81728]: osdmap e110: 3 total, 3 up, 3 in
Jan 31 06:52:37 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 06:52:37 compute-1 ceph-mon[81728]: 5.13 scrub starts
Jan 31 06:52:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:37 compute-1 ceph-mon[81728]: 5.13 scrub ok
Jan 31 06:52:37 compute-1 ceph-mon[81728]: pgmap v247: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 170 B/s wr, 11 op/s; 73 B/s, 2 objects/s recovering
Jan 31 06:52:37 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 31 06:52:37 compute-1 ceph-mon[81728]: 2.1e scrub starts
Jan 31 06:52:37 compute-1 ceph-mon[81728]: 2.1e scrub ok
Jan 31 06:52:37 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=10.240897179s) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 active pruub 217.942642212s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:37 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=10.240834236s) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 217.942642212s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:37 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 31 06:52:37 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 31 06:52:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:37.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:37.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 06:52:38 compute-1 ceph-mon[81728]: osdmap e111: 3 total, 3 up, 3 in
Jan 31 06:52:38 compute-1 ceph-mon[81728]: 3.c scrub starts
Jan 31 06:52:38 compute-1 ceph-mon[81728]: 3.c scrub ok
Jan 31 06:52:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 06:52:38 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:38 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:39 compute-1 ceph-mon[81728]: pgmap v250: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.6 KiB/s rd, 172 B/s wr, 11 op/s; 74 B/s, 2 objects/s recovering
Jan 31 06:52:39 compute-1 ceph-mon[81728]: osdmap e112: 3 total, 3 up, 3 in
Jan 31 06:52:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 31 06:52:39 compute-1 ceph-mon[81728]: 7.3 scrub starts
Jan 31 06:52:39 compute-1 ceph-mon[81728]: 7.3 scrub ok
Jan 31 06:52:39 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 98 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:39 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 06:52:39 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113 pruub=8.165588379s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 active pruub 218.213668823s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:39 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113 pruub=8.165482521s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 218.213668823s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:39 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:39.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:39.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:40 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 06:52:40 compute-1 ceph-mon[81728]: osdmap e113: 3 total, 3 up, 3 in
Jan 31 06:52:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:40 compute-1 ceph-mon[81728]: pgmap v252: 321 pgs: 1 active+clean+laggy, 1 active+clean+scrubbing, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:40 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 06:52:40 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 06:52:40 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:40 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:40 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=14.716454506s) [2] async=[2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 54'449 active pruub 226.095108032s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:40 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114 pruub=10.700444221s) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 active pruub 222.079086304s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:40 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114 pruub=10.700316429s) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 222.079086304s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:40 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=14.716309547s) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 226.095108032s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:41.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:41.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:41 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 06:52:41 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:41 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 06:52:41 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:42 compute-1 ceph-mon[81728]: 3.1b scrub starts
Jan 31 06:52:42 compute-1 ceph-mon[81728]: 3.1b scrub ok
Jan 31 06:52:42 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 06:52:42 compute-1 ceph-mon[81728]: osdmap e114: 3 total, 3 up, 3 in
Jan 31 06:52:42 compute-1 ceph-mon[81728]: 2.1f scrub starts
Jan 31 06:52:42 compute-1 ceph-mon[81728]: 2.1f scrub ok
Jan 31 06:52:42 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 31 06:52:42 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 31 06:52:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 06:52:43 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116 pruub=14.884787560s) [0] async=[0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 54'458 active pruub 228.372085571s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:43 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116 pruub=14.884650230s) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 228.372085571s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:43 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 06:52:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:43 compute-1 ceph-mon[81728]: osdmap e115: 3 total, 3 up, 3 in
Jan 31 06:52:43 compute-1 ceph-mon[81728]: pgmap v255: 321 pgs: 1 active+clean+laggy, 1 active+clean+scrubbing, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:52:43 compute-1 ceph-mon[81728]: 5.9 scrub starts
Jan 31 06:52:43 compute-1 ceph-mon[81728]: 5.9 scrub ok
Jan 31 06:52:43 compute-1 ceph-mon[81728]: osdmap e116: 3 total, 3 up, 3 in
Jan 31 06:52:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:43.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:44 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 06:52:44 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117 pruub=15.150360107s) [0] async=[0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 54'454 active pruub 229.562225342s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 06:52:44 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117 pruub=15.150243759s) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 229.562225342s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 06:52:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:44 compute-1 ceph-mon[81728]: osdmap e117: 3 total, 3 up, 3 in
Jan 31 06:52:44 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:45 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 06:52:45 compute-1 ceph-mon[81728]: pgmap v258: 321 pgs: 1 peering, 1 active+clean+laggy, 1 remapped+peering, 318 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 31 06:52:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:45 compute-1 ceph-mon[81728]: osdmap e118: 3 total, 3 up, 3 in
Jan 31 06:52:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:45.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:45.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:47 compute-1 ceph-mon[81728]: 8.1 scrub starts
Jan 31 06:52:47 compute-1 ceph-mon[81728]: 8.1 scrub ok
Jan 31 06:52:47 compute-1 ceph-mon[81728]: 4.1c deep-scrub starts
Jan 31 06:52:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:47 compute-1 ceph-mon[81728]: 4.1c deep-scrub ok
Jan 31 06:52:47 compute-1 ceph-mon[81728]: pgmap v260: 321 pgs: 1 peering, 1 active+clean+laggy, 1 remapped+peering, 318 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 2 objects/s recovering
Jan 31 06:52:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:47.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:47.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:48 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Jan 31 06:52:48 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
Jan 31 06:52:48 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:49 compute-1 ceph-mon[81728]: pgmap v261: 321 pgs: 1 peering, 1 active+clean+laggy, 1 remapped+peering, 318 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 06:52:49 compute-1 ceph-mon[81728]: 3.10 deep-scrub starts
Jan 31 06:52:49 compute-1 ceph-mon[81728]: 3.10 deep-scrub ok
Jan 31 06:52:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:49 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:49 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 31 06:52:49 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 31 06:52:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:49.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:49.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:50 compute-1 ceph-mon[81728]: 5.16 scrub starts
Jan 31 06:52:50 compute-1 ceph-mon[81728]: 5.16 scrub ok
Jan 31 06:52:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:51 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 31 06:52:51 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 31 06:52:51 compute-1 ceph-mon[81728]: pgmap v262: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 45 B/s, 1 objects/s recovering
Jan 31 06:52:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:51.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:52:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:52:52 compute-1 ceph-mon[81728]: 3.f scrub starts
Jan 31 06:52:52 compute-1 ceph-mon[81728]: 3.f scrub ok
Jan 31 06:52:52 compute-1 ceph-mon[81728]: 6.14 scrub starts
Jan 31 06:52:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:52 compute-1 ceph-mon[81728]: 6.14 scrub ok
Jan 31 06:52:52 compute-1 ceph-mon[81728]: pgmap v263: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 06:52:52 compute-1 sudo[88475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:52:52 compute-1 sudo[88475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:52:52 compute-1 sudo[88475]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:52 compute-1 sudo[88500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:52:52 compute-1 sudo[88500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:52:52 compute-1 sudo[88500]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:52 compute-1 sudo[88525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:52:52 compute-1 sudo[88525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:52:53 compute-1 sudo[88525]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:53 compute-1 sudo[88550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 06:52:53 compute-1 sudo[88550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:52:53 compute-1 podman[88649]: 2026-01-31 06:52:53.556153678 +0000 UTC m=+0.105731719 container exec 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 06:52:53 compute-1 ceph-mon[81728]: 8.7 scrub starts
Jan 31 06:52:53 compute-1 ceph-mon[81728]: 8.7 scrub ok
Jan 31 06:52:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:53 compute-1 podman[88649]: 2026-01-31 06:52:53.641134899 +0000 UTC m=+0.190712910 container exec_died 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:52:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:53.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:52:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:53.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:52:54 compute-1 sudo[88550]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:54 compute-1 sudo[88772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:52:54 compute-1 sudo[88772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:52:54 compute-1 sudo[88772]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:54 compute-1 sudo[88797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:52:54 compute-1 sudo[88797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:52:54 compute-1 sudo[88797]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:54 compute-1 sudo[88822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:52:54 compute-1 sudo[88822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:52:54 compute-1 sudo[88822]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:54 compute-1 sudo[88847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:52:54 compute-1 sudo[88847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:52:54 compute-1 ceph-mon[81728]: 8.e deep-scrub starts
Jan 31 06:52:54 compute-1 ceph-mon[81728]: 8.e deep-scrub ok
Jan 31 06:52:54 compute-1 ceph-mon[81728]: 5.d scrub starts
Jan 31 06:52:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:54 compute-1 ceph-mon[81728]: 5.d scrub ok
Jan 31 06:52:54 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:52:54 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:52:54 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:52:54 compute-1 ceph-mon[81728]: pgmap v264: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 06:52:54 compute-1 sudo[88847]: pam_unix(sudo:session): session closed for user root
Jan 31 06:52:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:55.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:55.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:52:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:52:55 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:52:57 compute-1 ceph-mon[81728]: 2.10 scrub starts
Jan 31 06:52:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:57 compute-1 ceph-mon[81728]: 2.10 scrub ok
Jan 31 06:52:57 compute-1 ceph-mon[81728]: pgmap v265: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 06:52:57 compute-1 ceph-mon[81728]: 8.13 scrub starts
Jan 31 06:52:57 compute-1 ceph-mon[81728]: 8.13 scrub ok
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.033599) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377033665, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7131, "num_deletes": 255, "total_data_size": 13234417, "memory_usage": 13429120, "flush_reason": "Manual Compaction"}
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377215722, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7979596, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 246, "largest_seqno": 7136, "table_properties": {"data_size": 7950960, "index_size": 18601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8965, "raw_key_size": 84798, "raw_average_key_size": 23, "raw_value_size": 7881991, "raw_average_value_size": 2212, "num_data_blocks": 821, "num_entries": 3563, "num_filter_entries": 3563, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 1769842193, "file_creation_time": 1769842377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 182208 microseconds, and 14727 cpu microseconds.
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.215812) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7979596 bytes OK
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.215836) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.362525) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.362600) EVENT_LOG_v1 {"time_micros": 1769842377362586, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.362630) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13195921, prev total WAL file size 13196185, number of live WAL files 2.
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.364293) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7792KB) 8(1648B)]
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377364370, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7981244, "oldest_snapshot_seqno": -1}
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3312 keys, 7976045 bytes, temperature: kUnknown
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377493167, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7976045, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7948104, "index_size": 18551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80620, "raw_average_key_size": 24, "raw_value_size": 7882271, "raw_average_value_size": 2379, "num_data_blocks": 820, "num_entries": 3312, "num_filter_entries": 3312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769842377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.493442) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7976045 bytes
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.578553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.9 rd, 61.9 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.6, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3568, records dropped: 256 output_compression: NoCompression
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.578625) EVENT_LOG_v1 {"time_micros": 1769842377578598, "job": 4, "event": "compaction_finished", "compaction_time_micros": 128892, "compaction_time_cpu_micros": 15136, "output_level": 6, "num_output_files": 1, "total_output_size": 7976045, "num_input_records": 3568, "num_output_records": 3312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377579612, "job": 4, "event": "table_file_deletion", "file_number": 14}
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842377579652, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 06:52:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:52:57.364201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:52:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:57.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:57.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:57 compute-1 ceph-mon[81728]: 8.1a scrub starts
Jan 31 06:52:57 compute-1 ceph-mon[81728]: 8.1a scrub ok
Jan 31 06:52:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:52:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:52:59 compute-1 ceph-mon[81728]: pgmap v266: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 06:52:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:52:59.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:52:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:52:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:52:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:52:59.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:00 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:01 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Jan 31 06:53:01 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Jan 31 06:53:01 compute-1 ceph-mon[81728]: pgmap v267: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 06:53:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:01.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:01 compute-1 sudo[88934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:53:01 compute-1 sudo[88934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:53:01 compute-1 sudo[88934]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:01 compute-1 sudo[88959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:53:01 compute-1 sudo[88959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:53:01 compute-1 sudo[88959]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:02 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 31 06:53:02 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 31 06:53:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:02 compute-1 ceph-mon[81728]: 6.15 scrub starts
Jan 31 06:53:02 compute-1 ceph-mon[81728]: 6.15 scrub ok
Jan 31 06:53:02 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:53:02 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:53:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:02 compute-1 ceph-mon[81728]: 5.8 deep-scrub starts
Jan 31 06:53:02 compute-1 ceph-mon[81728]: 5.8 deep-scrub ok
Jan 31 06:53:02 compute-1 ceph-mon[81728]: pgmap v268: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:03 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 31 06:53:03 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 31 06:53:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:03.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:03 compute-1 ceph-mon[81728]: 8.1d scrub starts
Jan 31 06:53:03 compute-1 ceph-mon[81728]: 8.1d scrub ok
Jan 31 06:53:03 compute-1 ceph-mon[81728]: 5.15 scrub starts
Jan 31 06:53:03 compute-1 ceph-mon[81728]: 5.15 scrub ok
Jan 31 06:53:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:04 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 31 06:53:04 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 31 06:53:05 compute-1 ceph-mon[81728]: 8.1e scrub starts
Jan 31 06:53:05 compute-1 ceph-mon[81728]: 8.1e scrub ok
Jan 31 06:53:05 compute-1 ceph-mon[81728]: 3.13 scrub starts
Jan 31 06:53:05 compute-1 ceph-mon[81728]: 3.13 scrub ok
Jan 31 06:53:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:05 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:05 compute-1 ceph-mon[81728]: pgmap v269: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:05 compute-1 ceph-mon[81728]: 9.1 deep-scrub starts
Jan 31 06:53:05 compute-1 ceph-mon[81728]: 9.1 deep-scrub ok
Jan 31 06:53:05 compute-1 ceph-mon[81728]: 3.14 scrub starts
Jan 31 06:53:05 compute-1 ceph-mon[81728]: 3.14 scrub ok
Jan 31 06:53:05 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 31 06:53:05 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 31 06:53:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:05.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:05.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:06 compute-1 ceph-mon[81728]: 6.b scrub starts
Jan 31 06:53:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:06 compute-1 ceph-mon[81728]: 6.b scrub ok
Jan 31 06:53:06 compute-1 ceph-mon[81728]: 9.2 deep-scrub starts
Jan 31 06:53:06 compute-1 ceph-mon[81728]: 9.2 deep-scrub ok
Jan 31 06:53:06 compute-1 ceph-mon[81728]: 4.13 scrub starts
Jan 31 06:53:06 compute-1 ceph-mon[81728]: 4.13 scrub ok
Jan 31 06:53:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:07 compute-1 ceph-mon[81728]: pgmap v270: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:07.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:07.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:08 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 31 06:53:08 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 31 06:53:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:08 compute-1 ceph-mon[81728]: 9.4 scrub starts
Jan 31 06:53:08 compute-1 ceph-mon[81728]: 9.4 scrub ok
Jan 31 06:53:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:08 compute-1 ceph-mon[81728]: 4.6 scrub starts
Jan 31 06:53:08 compute-1 ceph-mon[81728]: 4.6 scrub ok
Jan 31 06:53:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:09 compute-1 ceph-mon[81728]: pgmap v271: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:09 compute-1 ceph-mon[81728]: 5.11 scrub starts
Jan 31 06:53:09 compute-1 ceph-mon[81728]: 5.11 scrub ok
Jan 31 06:53:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:09 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:09.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:53:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:09.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:53:10 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 31 06:53:10 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 31 06:53:10 compute-1 ceph-mon[81728]: 7.6 scrub starts
Jan 31 06:53:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:10 compute-1 ceph-mon[81728]: 7.6 scrub ok
Jan 31 06:53:11 compute-1 ceph-mon[81728]: pgmap v272: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:11 compute-1 ceph-mon[81728]: 3.16 scrub starts
Jan 31 06:53:11 compute-1 ceph-mon[81728]: 3.16 scrub ok
Jan 31 06:53:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:11.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:11.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:12 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 31 06:53:12 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 31 06:53:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:12 compute-1 ceph-mon[81728]: pgmap v273: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:13 compute-1 ceph-mon[81728]: 5.1f scrub starts
Jan 31 06:53:13 compute-1 ceph-mon[81728]: 5.1f scrub ok
Jan 31 06:53:13 compute-1 ceph-mon[81728]: 9.c scrub starts
Jan 31 06:53:13 compute-1 ceph-mon[81728]: 9.c scrub ok
Jan 31 06:53:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:13 compute-1 ceph-mon[81728]: 4.14 scrub starts
Jan 31 06:53:13 compute-1 ceph-mon[81728]: 4.14 scrub ok
Jan 31 06:53:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:13.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:53:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:13.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:53:14 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 31 06:53:14 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 31 06:53:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:14 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:14 compute-1 ceph-mon[81728]: pgmap v274: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:15 compute-1 ceph-mon[81728]: 5.10 scrub starts
Jan 31 06:53:15 compute-1 ceph-mon[81728]: 5.10 scrub ok
Jan 31 06:53:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:15.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:53:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:15.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:53:16 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Jan 31 06:53:16 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Jan 31 06:53:16 compute-1 ceph-mon[81728]: 9.14 deep-scrub starts
Jan 31 06:53:16 compute-1 ceph-mon[81728]: 9.14 deep-scrub ok
Jan 31 06:53:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:16 compute-1 ceph-mon[81728]: 10.6 deep-scrub starts
Jan 31 06:53:16 compute-1 ceph-mon[81728]: 10.6 deep-scrub ok
Jan 31 06:53:16 compute-1 ceph-mon[81728]: pgmap v275: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:53:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:17.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:53:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:17.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:18 compute-1 sudo[88356]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:18 compute-1 sudo[89133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtkvqimuqllcaupxqqnmhxxpxivqzjac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842398.3652942-370-231621785674930/AnsiballZ_command.py'
Jan 31 06:53:18 compute-1 sudo[89133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:18 compute-1 python3.9[89135]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:53:18 compute-1 ceph-mon[81728]: 9.1c deep-scrub starts
Jan 31 06:53:18 compute-1 ceph-mon[81728]: 9.1c deep-scrub ok
Jan 31 06:53:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:18 compute-1 ceph-mon[81728]: pgmap v276: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:19 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:19 compute-1 sudo[89133]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:19.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:19 compute-1 ceph-mon[81728]: 11.2 scrub starts
Jan 31 06:53:19 compute-1 ceph-mon[81728]: 11.2 scrub ok
Jan 31 06:53:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:19.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:20 compute-1 sudo[89420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mszznmfveqwjyhdnushzombubsybmjgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842399.7017915-394-153281106367305/AnsiballZ_selinux.py'
Jan 31 06:53:20 compute-1 sudo[89420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:20 compute-1 python3.9[89422]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 06:53:20 compute-1 sudo[89420]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:21 compute-1 sudo[89572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbovafnvsixhspuqmttisbsyyorytrab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842400.9602277-427-43184933264242/AnsiballZ_command.py'
Jan 31 06:53:21 compute-1 sudo[89572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:21 compute-1 ceph-mon[81728]: pgmap v277: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:21 compute-1 ceph-mon[81728]: 11.6 scrub starts
Jan 31 06:53:21 compute-1 ceph-mon[81728]: 11.6 scrub ok
Jan 31 06:53:21 compute-1 python3.9[89574]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 06:53:21 compute-1 sudo[89572]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:21 compute-1 sudo[89724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egmphnkurkarhdznogyegnmnokkgkvmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842401.5795105-451-111556509702168/AnsiballZ_file.py'
Jan 31 06:53:21 compute-1 sudo[89724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:21.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:21 compute-1 python3.9[89726]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:53:21 compute-1 sudo[89724]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:53:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:21.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:53:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:22 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 31 06:53:22 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 31 06:53:22 compute-1 sudo[89876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctmiwcjwaornqsxranfbkufurznlpwyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842402.1787868-475-7607476039085/AnsiballZ_mount.py'
Jan 31 06:53:22 compute-1 sudo[89876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:22 compute-1 python3.9[89878]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 06:53:22 compute-1 sudo[89876]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:23 compute-1 ceph-mon[81728]: 6.1b deep-scrub starts
Jan 31 06:53:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:23 compute-1 ceph-mon[81728]: 6.1b deep-scrub ok
Jan 31 06:53:23 compute-1 ceph-mon[81728]: 10.7 scrub starts
Jan 31 06:53:23 compute-1 ceph-mon[81728]: pgmap v278: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:23 compute-1 ceph-mon[81728]: 10.7 scrub ok
Jan 31 06:53:23 compute-1 ceph-mon[81728]: 11.9 scrub starts
Jan 31 06:53:23 compute-1 ceph-mon[81728]: 11.9 scrub ok
Jan 31 06:53:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:23.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:23 compute-1 sudo[90028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwvyawqbwbyphubnneoghhohbojfgdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842403.708879-559-118478607354888/AnsiballZ_file.py'
Jan 31 06:53:23 compute-1 sudo[90028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:23.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:24 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:24 compute-1 python3.9[90030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:53:24 compute-1 sudo[90028]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:24 compute-1 ceph-mon[81728]: 5.0 scrub starts
Jan 31 06:53:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:24 compute-1 ceph-mon[81728]: 5.0 scrub ok
Jan 31 06:53:24 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:24 compute-1 ceph-mon[81728]: pgmap v279: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:24 compute-1 sudo[90180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suvaabvtwupajcscmofrihyffyddxcfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842404.3914537-583-42527625182258/AnsiballZ_stat.py'
Jan 31 06:53:24 compute-1 sudo[90180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:24 compute-1 python3.9[90182]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:53:24 compute-1 sudo[90180]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:24 compute-1 sudo[90258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuiddergaplhdnezsstqlruvxqshjzvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842404.3914537-583-42527625182258/AnsiballZ_file.py'
Jan 31 06:53:24 compute-1 sudo[90258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:25 compute-1 python3.9[90260]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:53:25 compute-1 sudo[90258]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:25 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 31 06:53:25 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 31 06:53:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:53:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:53:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:53:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:25.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:53:26 compute-1 systemd[72625]: Created slice User Background Tasks Slice.
Jan 31 06:53:26 compute-1 systemd[72625]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 06:53:26 compute-1 systemd[72625]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 06:53:26 compute-1 sudo[90411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anpkofpimrelkrmrotwxtlhtklrrhscp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842406.010714-646-212333097264799/AnsiballZ_stat.py'
Jan 31 06:53:26 compute-1 sudo[90411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:26 compute-1 python3.9[90413]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:53:26 compute-1 sudo[90411]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:26 compute-1 ceph-mon[81728]: 10.9 scrub starts
Jan 31 06:53:26 compute-1 ceph-mon[81728]: 10.9 scrub ok
Jan 31 06:53:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:26 compute-1 ceph-mon[81728]: pgmap v280: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:27 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 31 06:53:27 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 31 06:53:27 compute-1 sudo[90565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttpvowvlqqrfmpckahfwzfggveaksfkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842407.0499907-685-180256409427914/AnsiballZ_getent.py'
Jan 31 06:53:27 compute-1 sudo[90565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:27 compute-1 python3.9[90567]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 06:53:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:27 compute-1 sudo[90565]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:27.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:27.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:28 compute-1 sudo[90718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpwqqpiyxcpthjdbcdrxrjmsszvjvepm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842407.9547286-715-51793753956009/AnsiballZ_getent.py'
Jan 31 06:53:28 compute-1 sudo[90718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:28 compute-1 python3.9[90720]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 06:53:28 compute-1 sudo[90718]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:28 compute-1 ceph-mon[81728]: 10.a scrub starts
Jan 31 06:53:28 compute-1 ceph-mon[81728]: 10.a scrub ok
Jan 31 06:53:28 compute-1 ceph-mon[81728]: 6.1 scrub starts
Jan 31 06:53:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:28 compute-1 ceph-mon[81728]: 6.1 scrub ok
Jan 31 06:53:28 compute-1 ceph-mon[81728]: pgmap v281: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:28 compute-1 sudo[90871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeytckwmdvsymtmwwdzwlebrcjwwjakf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842408.5675006-739-144425364463110/AnsiballZ_group.py'
Jan 31 06:53:28 compute-1 sudo[90871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:29 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:29 compute-1 python3.9[90873]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:53:29 compute-1 sudo[90871]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:29 compute-1 sudo[91023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgmayhuefgdlfiqkxnthiyszfulalmkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842409.460547-766-253429487333612/AnsiballZ_file.py'
Jan 31 06:53:29 compute-1 sudo[91023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:29 compute-1 ceph-mon[81728]: 4.3 scrub starts
Jan 31 06:53:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:29 compute-1 ceph-mon[81728]: 4.3 scrub ok
Jan 31 06:53:29 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:29 compute-1 python3.9[91025]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 06:53:29 compute-1 sudo[91023]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:29.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:30 compute-1 sudo[91175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiyrzxmopeyajgoikppnyiwozgtzbzaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842410.303392-799-185248239027943/AnsiballZ_dnf.py'
Jan 31 06:53:30 compute-1 sudo[91175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:30 compute-1 ceph-mon[81728]: pgmap v282: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:30 compute-1 python3.9[91177]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:53:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:31.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:32 compute-1 sudo[91175]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:32 compute-1 sudo[91328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aowzuzradfobbutramthjqmsnlunbixl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842412.277447-823-263757290954887/AnsiballZ_file.py'
Jan 31 06:53:32 compute-1 sudo[91328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:32 compute-1 python3.9[91330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:53:32 compute-1 sudo[91328]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:32 compute-1 ceph-mon[81728]: 11.b scrub starts
Jan 31 06:53:32 compute-1 ceph-mon[81728]: 11.b scrub ok
Jan 31 06:53:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:32 compute-1 ceph-mon[81728]: pgmap v283: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:33 compute-1 sudo[91480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqckqardwdzzbuavppchbpmrgdmsfkgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842412.896252-847-90575632541742/AnsiballZ_stat.py'
Jan 31 06:53:33 compute-1 sudo[91480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:33 compute-1 python3.9[91482]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:53:33 compute-1 sudo[91480]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:33 compute-1 sudo[91558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzqfsrluerrtkfvbwtocshriqqypzrfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842412.896252-847-90575632541742/AnsiballZ_file.py'
Jan 31 06:53:33 compute-1 sudo[91558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:33 compute-1 python3.9[91560]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:53:33 compute-1 sudo[91558]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:33 compute-1 ceph-mon[81728]: 7.1e deep-scrub starts
Jan 31 06:53:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:33 compute-1 ceph-mon[81728]: 7.1e deep-scrub ok
Jan 31 06:53:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:33.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:34 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:34 compute-1 sudo[91710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szsaggmnlrhfwggwipvhjxghyyzxiwqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842414.2293506-886-168778341587387/AnsiballZ_stat.py'
Jan 31 06:53:34 compute-1 sudo[91710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:34 compute-1 python3.9[91712]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:53:34 compute-1 sudo[91710]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:34 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:34 compute-1 ceph-mon[81728]: pgmap v284: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:34 compute-1 sudo[91788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irhorbvtospltotsanvserduhgbcayxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842414.2293506-886-168778341587387/AnsiballZ_file.py'
Jan 31 06:53:34 compute-1 sudo[91788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:35 compute-1 python3.9[91790]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:53:35 compute-1 sudo[91788]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:35 compute-1 sudo[91940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkzxnzuittnyxcxydzeffctnbaiivbfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842415.5623226-931-72122901876513/AnsiballZ_dnf.py'
Jan 31 06:53:35 compute-1 sudo[91940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:35.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:35 compute-1 python3.9[91942]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:53:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:35.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:36 compute-1 ceph-mon[81728]: pgmap v285: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:37 compute-1 sudo[91940]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:37 compute-1 ceph-mon[81728]: 6.1f scrub starts
Jan 31 06:53:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:37 compute-1 ceph-mon[81728]: 6.1f scrub ok
Jan 31 06:53:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:37.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:38 compute-1 python3.9[92093]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:53:38 compute-1 ceph-mon[81728]: 4.1d scrub starts
Jan 31 06:53:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:38 compute-1 ceph-mon[81728]: 4.1d scrub ok
Jan 31 06:53:38 compute-1 ceph-mon[81728]: 11.c scrub starts
Jan 31 06:53:38 compute-1 ceph-mon[81728]: 11.c scrub ok
Jan 31 06:53:38 compute-1 ceph-mon[81728]: pgmap v286: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:38 compute-1 python3.9[92245]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 06:53:39 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:39 compute-1 python3.9[92395]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:53:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:39.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:40 compute-1 ceph-mon[81728]: 2.1b scrub starts
Jan 31 06:53:40 compute-1 ceph-mon[81728]: 2.1b scrub ok
Jan 31 06:53:40 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:40 compute-1 sudo[92545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbybtvktchuvxfhohtbqsilsncspewqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842420.1768024-1054-179516358747280/AnsiballZ_systemd.py'
Jan 31 06:53:40 compute-1 sudo[92545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:41 compute-1 python3.9[92547]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:53:41 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 06:53:41 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 06:53:41 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 06:53:41 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 06:53:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:41 compute-1 ceph-mon[81728]: 11.d scrub starts
Jan 31 06:53:41 compute-1 ceph-mon[81728]: 11.d scrub ok
Jan 31 06:53:41 compute-1 ceph-mon[81728]: pgmap v287: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:41 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 06:53:41 compute-1 sudo[92545]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:41.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:42.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:42 compute-1 python3.9[92709]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 06:53:42 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 31 06:53:42 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 31 06:53:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:42 compute-1 ceph-mon[81728]: 2.15 scrub starts
Jan 31 06:53:42 compute-1 ceph-mon[81728]: 2.15 scrub ok
Jan 31 06:53:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:43 compute-1 ceph-mon[81728]: 10.b scrub starts
Jan 31 06:53:43 compute-1 ceph-mon[81728]: 10.b scrub ok
Jan 31 06:53:43 compute-1 ceph-mon[81728]: pgmap v288: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:43.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:44.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:44 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:44 compute-1 ceph-mon[81728]: 11.10 scrub starts
Jan 31 06:53:44 compute-1 ceph-mon[81728]: 11.10 scrub ok
Jan 31 06:53:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:44 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:44 compute-1 ceph-mon[81728]: pgmap v289: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:45 compute-1 sudo[92860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zphprszvxyoxaxieuigxqnknolkfschu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842425.0889914-1225-69554869070180/AnsiballZ_systemd.py'
Jan 31 06:53:45 compute-1 sudo[92860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:45 compute-1 python3.9[92862]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:53:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:45 compute-1 sudo[92860]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:53:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:45.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:53:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:53:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:46.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:53:46 compute-1 sudo[93014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viovreuapyjdnwvjgkmnpfwrarcnnxhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842425.8565197-1225-174178845960081/AnsiballZ_systemd.py'
Jan 31 06:53:46 compute-1 sudo[93014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:46 compute-1 python3.9[93016]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:53:46 compute-1 sudo[93014]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:46 compute-1 ceph-mon[81728]: 5.12 scrub starts
Jan 31 06:53:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:46 compute-1 ceph-mon[81728]: 5.12 scrub ok
Jan 31 06:53:46 compute-1 ceph-mon[81728]: pgmap v290: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:46 compute-1 sshd-session[86444]: Connection closed by 192.168.122.30 port 33086
Jan 31 06:53:46 compute-1 sshd-session[86441]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:53:46 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 06:53:46 compute-1 systemd[1]: session-34.scope: Consumed 1min 2.867s CPU time.
Jan 31 06:53:46 compute-1 systemd-logind[788]: Session 34 logged out. Waiting for processes to exit.
Jan 31 06:53:47 compute-1 systemd-logind[788]: Removed session 34.
Jan 31 06:53:47 compute-1 ceph-mon[81728]: 11.11 deep-scrub starts
Jan 31 06:53:47 compute-1 ceph-mon[81728]: 11.11 deep-scrub ok
Jan 31 06:53:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:47 compute-1 ceph-mon[81728]: 7.1d scrub starts
Jan 31 06:53:47 compute-1 ceph-mon[81728]: 7.1d scrub ok
Jan 31 06:53:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:53:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:47.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:53:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:48.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:48 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 31 06:53:48 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 31 06:53:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:48 compute-1 ceph-mon[81728]: 10.c scrub starts
Jan 31 06:53:48 compute-1 ceph-mon[81728]: 10.c scrub ok
Jan 31 06:53:48 compute-1 ceph-mon[81728]: pgmap v291: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:49 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 31 06:53:49 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 31 06:53:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:49 compute-1 ceph-mon[81728]: 10.d scrub starts
Jan 31 06:53:49 compute-1 ceph-mon[81728]: 10.d scrub ok
Jan 31 06:53:49 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:50.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:50 compute-1 ceph-mon[81728]: 11.15 deep-scrub starts
Jan 31 06:53:50 compute-1 ceph-mon[81728]: 11.15 deep-scrub ok
Jan 31 06:53:50 compute-1 ceph-mon[81728]: pgmap v292: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:51 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 31 06:53:51 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 31 06:53:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:52.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:52 compute-1 ceph-mon[81728]: 10.e scrub starts
Jan 31 06:53:52 compute-1 ceph-mon[81728]: 10.e scrub ok
Jan 31 06:53:52 compute-1 ceph-mon[81728]: 11.18 scrub starts
Jan 31 06:53:52 compute-1 ceph-mon[81728]: 11.18 scrub ok
Jan 31 06:53:52 compute-1 sshd-session[93043]: Accepted publickey for zuul from 192.168.122.30 port 52826 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:53:52 compute-1 systemd-logind[788]: New session 35 of user zuul.
Jan 31 06:53:52 compute-1 systemd[1]: Started Session 35 of User zuul.
Jan 31 06:53:52 compute-1 sshd-session[93043]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:53:53 compute-1 ceph-mon[81728]: 7.10 deep-scrub starts
Jan 31 06:53:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:53 compute-1 ceph-mon[81728]: 7.10 deep-scrub ok
Jan 31 06:53:53 compute-1 ceph-mon[81728]: 11.1f scrub starts
Jan 31 06:53:53 compute-1 ceph-mon[81728]: 11.1f scrub ok
Jan 31 06:53:53 compute-1 ceph-mon[81728]: pgmap v293: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:53 compute-1 python3.9[93196]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:53:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:53.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:54.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:54 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:54 compute-1 ceph-mon[81728]: 10.1b scrub starts
Jan 31 06:53:54 compute-1 ceph-mon[81728]: 10.1b scrub ok
Jan 31 06:53:54 compute-1 sudo[93350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzsfvqsvlkjyncggnlomchmxdailzvxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842434.3579528-69-3947665979423/AnsiballZ_getent.py'
Jan 31 06:53:54 compute-1 sudo[93350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:54 compute-1 python3.9[93352]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 06:53:54 compute-1 sudo[93350]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:55 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 31 06:53:55 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 31 06:53:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:55 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:55 compute-1 ceph-mon[81728]: pgmap v294: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:55 compute-1 sudo[93503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iabbcctvgccsbgxkdmuwbufzmtbgdxrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842435.2985086-105-264809096793898/AnsiballZ_setup.py'
Jan 31 06:53:55 compute-1 sudo[93503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:55 compute-1 python3.9[93505]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:53:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:55.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:56.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:56 compute-1 sudo[93503]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:56 compute-1 sudo[93587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sskuzmafwofmltyjzqjdbexpmgedbtqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842435.2985086-105-264809096793898/AnsiballZ_dnf.py'
Jan 31 06:53:56 compute-1 sudo[93587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:56 compute-1 ceph-mon[81728]: 6.13 deep-scrub starts
Jan 31 06:53:56 compute-1 ceph-mon[81728]: 6.13 deep-scrub ok
Jan 31 06:53:56 compute-1 ceph-mon[81728]: 10.16 scrub starts
Jan 31 06:53:56 compute-1 ceph-mon[81728]: 10.16 scrub ok
Jan 31 06:53:56 compute-1 ceph-mon[81728]: 10.18 scrub starts
Jan 31 06:53:56 compute-1 ceph-mon[81728]: 10.18 scrub ok
Jan 31 06:53:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:56 compute-1 python3.9[93589]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 06:53:57 compute-1 ceph-mon[81728]: pgmap v295: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:57.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:53:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:53:58.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:53:58 compute-1 sudo[93587]: pam_unix(sudo:session): session closed for user root
Jan 31 06:53:58 compute-1 sudo[93740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhvcfhsurnddxlpnnctzfulrtfptvfzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842438.3500307-147-128710959647044/AnsiballZ_dnf.py'
Jan 31 06:53:58 compute-1 sudo[93740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:53:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:58 compute-1 ceph-mon[81728]: pgmap v296: 321 pgs: 1 active+clean+scrubbing, 1 active+clean+laggy, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:53:58 compute-1 python3.9[93742]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:53:59 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:53:59 compute-1 ceph-mon[81728]: 10.19 scrub starts
Jan 31 06:53:59 compute-1 ceph-mon[81728]: 10.19 scrub ok
Jan 31 06:53:59 compute-1 ceph-mon[81728]: 7.a scrub starts
Jan 31 06:53:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:53:59 compute-1 ceph-mon[81728]: 7.a scrub ok
Jan 31 06:53:59 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:53:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:53:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:53:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:53:59.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:54:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:54:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:00.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:54:00 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 31 06:54:00 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 31 06:54:00 compute-1 sudo[93740]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:00 compute-1 ceph-mon[81728]: pgmap v297: 321 pgs: 1 active+clean+laggy, 1 active+clean+scrubbing, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:00 compute-1 sudo[93893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhzfmnqgyoljzngwqkoqkfjvsitaafbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842440.3746128-171-252874642420261/AnsiballZ_systemd.py'
Jan 31 06:54:00 compute-1 sudo[93893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:01 compute-1 python3.9[93895]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:54:01 compute-1 sudo[93893]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:01 compute-1 ceph-mon[81728]: 10.17 scrub starts
Jan 31 06:54:01 compute-1 ceph-mon[81728]: 10.17 scrub ok
Jan 31 06:54:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:01.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:02 compute-1 sudo[94049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:54:02 compute-1 sudo[94049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:02 compute-1 sudo[94049]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:02 compute-1 sudo[94074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:54:02 compute-1 sudo[94074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:02 compute-1 sudo[94074]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:02 compute-1 sudo[94099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:54:02 compute-1 python3.9[94048]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:54:02 compute-1 sudo[94099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:02 compute-1 sudo[94099]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:02 compute-1 sudo[94124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 06:54:02 compute-1 sudo[94124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:02 compute-1 sudo[94383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpfkqamwkrizdfmrspnqjpzumetqhbzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842442.447283-225-82491483424159/AnsiballZ_sefcontext.py'
Jan 31 06:54:02 compute-1 sudo[94383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:03 compute-1 ceph-mon[81728]: pgmap v298: 321 pgs: 1 active+clean+laggy, 1 active+clean+scrubbing, 319 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:03 compute-1 podman[94296]: 2026-01-31 06:54:03.046461349 +0000 UTC m=+0.511087926 container exec 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:54:03 compute-1 podman[94296]: 2026-01-31 06:54:03.14353012 +0000 UTC m=+0.608156687 container exec_died 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:54:03 compute-1 python3.9[94385]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 06:54:03 compute-1 sudo[94383]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:03 compute-1 sudo[94124]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:03 compute-1 sudo[94574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:54:03 compute-1 sudo[94574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:03 compute-1 sudo[94574]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:03 compute-1 sudo[94622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:54:03 compute-1 sudo[94622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:03 compute-1 sudo[94622]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:03 compute-1 sudo[94647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:54:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:03.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:03 compute-1 sudo[94647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:03 compute-1 sudo[94647]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:03 compute-1 sudo[94696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:54:03 compute-1 sudo[94696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:04.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:54:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:54:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:54:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:54:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:54:04 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:54:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:04 compute-1 python3.9[94745]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:54:04 compute-1 sudo[94696]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:04 compute-1 sudo[94933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiohxdvelhqgjeqfjbrpqthmysqdhcnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842444.66573-279-250942457570427/AnsiballZ_dnf.py'
Jan 31 06:54:04 compute-1 sudo[94933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:05 compute-1 python3.9[94935]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:54:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:05 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 184 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:05 compute-1 ceph-mon[81728]: pgmap v299: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:54:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:54:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:54:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:54:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:54:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:54:05 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 31 06:54:05 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 31 06:54:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:05.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:06 compute-1 ceph-mon[81728]: 10.4 scrub starts
Jan 31 06:54:06 compute-1 ceph-mon[81728]: 10.4 scrub ok
Jan 31 06:54:06 compute-1 ceph-mon[81728]: 10.1a scrub starts
Jan 31 06:54:06 compute-1 ceph-mon[81728]: 10.1a scrub ok
Jan 31 06:54:06 compute-1 ceph-mon[81728]: 10.8 scrub starts
Jan 31 06:54:06 compute-1 ceph-mon[81728]: 10.8 scrub ok
Jan 31 06:54:06 compute-1 sudo[94933]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:07 compute-1 sudo[95086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onyxbmnxktqjrnzdxdnmpdzpiqctyhom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842446.7009158-303-263416658886290/AnsiballZ_command.py'
Jan 31 06:54:07 compute-1 sudo[95086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:07 compute-1 ceph-mon[81728]: 10.1e deep-scrub starts
Jan 31 06:54:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:07 compute-1 ceph-mon[81728]: 10.1e deep-scrub ok
Jan 31 06:54:07 compute-1 ceph-mon[81728]: pgmap v300: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:07 compute-1 ceph-mon[81728]: 10.5 scrub starts
Jan 31 06:54:07 compute-1 ceph-mon[81728]: 10.5 scrub ok
Jan 31 06:54:07 compute-1 python3.9[95088]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:54:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 31 06:54:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 31 06:54:07 compute-1 sudo[95086]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:54:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:54:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:08 compute-1 ceph-mon[81728]: 10.2 scrub starts
Jan 31 06:54:08 compute-1 ceph-mon[81728]: 10.2 scrub ok
Jan 31 06:54:08 compute-1 sudo[95373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goleirdqlxfgtntdjydphjofwmmyhwvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842448.1892169-327-83220754986620/AnsiballZ_file.py'
Jan 31 06:54:08 compute-1 sudo[95373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:08 compute-1 python3.9[95375]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 06:54:08 compute-1 sudo[95373]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:09 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:09 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 31 06:54:09 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 31 06:54:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:09 compute-1 ceph-mon[81728]: pgmap v301: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:09 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:09 compute-1 python3.9[95525]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:54:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:54:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:09.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:54:09 compute-1 sudo[95677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smsxsgonbbgjckuqnnuxaaeiujljwitz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842449.6703894-375-237820199295091/AnsiballZ_dnf.py'
Jan 31 06:54:09 compute-1 sudo[95677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:10.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:10 compute-1 python3.9[95679]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:54:10 compute-1 ceph-mon[81728]: 10.11 scrub starts
Jan 31 06:54:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:10 compute-1 ceph-mon[81728]: 10.11 scrub ok
Jan 31 06:54:10 compute-1 ceph-mon[81728]: 10.1c scrub starts
Jan 31 06:54:10 compute-1 ceph-mon[81728]: 10.1c scrub ok
Jan 31 06:54:10 compute-1 sudo[95681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:54:10 compute-1 sudo[95681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:10 compute-1 sudo[95681]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:10 compute-1 sudo[95706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:54:10 compute-1 sudo[95706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:54:10 compute-1 sudo[95706]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:11 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 31 06:54:11 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 31 06:54:11 compute-1 ceph-mon[81728]: 10.10 scrub starts
Jan 31 06:54:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:11 compute-1 ceph-mon[81728]: 10.10 scrub ok
Jan 31 06:54:11 compute-1 ceph-mon[81728]: pgmap v302: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:11 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:54:11 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:54:11 compute-1 sudo[95677]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:11.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:11 compute-1 sudo[95880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qibjfiwphwwdyfvgtlfrsvjmawmbcbhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842451.7233412-402-152313205688157/AnsiballZ_dnf.py'
Jan 31 06:54:11 compute-1 sudo[95880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:12.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:12 compute-1 python3.9[95882]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:54:12 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 31 06:54:12 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 31 06:54:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:12 compute-1 ceph-mon[81728]: 10.1d scrub starts
Jan 31 06:54:12 compute-1 ceph-mon[81728]: 10.1d scrub ok
Jan 31 06:54:12 compute-1 ceph-mon[81728]: 10.15 scrub starts
Jan 31 06:54:12 compute-1 ceph-mon[81728]: 10.15 scrub ok
Jan 31 06:54:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:12 compute-1 ceph-mon[81728]: 10.12 scrub starts
Jan 31 06:54:12 compute-1 ceph-mon[81728]: 10.12 scrub ok
Jan 31 06:54:13 compute-1 ceph-mon[81728]: 10.1f scrub starts
Jan 31 06:54:13 compute-1 ceph-mon[81728]: 10.1f scrub ok
Jan 31 06:54:13 compute-1 ceph-mon[81728]: pgmap v303: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:13 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 31 06:54:13 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 31 06:54:13 compute-1 sudo[95880]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:13.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:14.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:14 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:14 compute-1 sudo[96033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atkalllhjneoixnntoeejpnfyucvdjcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842453.9316936-438-190649840489026/AnsiballZ_stat.py'
Jan 31 06:54:14 compute-1 sudo[96033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:14 compute-1 ceph-mon[81728]: 8.14 scrub starts
Jan 31 06:54:14 compute-1 ceph-mon[81728]: 8.14 scrub ok
Jan 31 06:54:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:14 compute-1 ceph-mon[81728]: 10.3 scrub starts
Jan 31 06:54:14 compute-1 ceph-mon[81728]: 10.3 scrub ok
Jan 31 06:54:14 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:14 compute-1 python3.9[96035]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:54:14 compute-1 sudo[96033]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:15 compute-1 sudo[96187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsbidffltuwmzutdhusocmulfnszoszp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842454.7035096-462-120443657611654/AnsiballZ_slurp.py'
Jan 31 06:54:15 compute-1 sudo[96187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:15 compute-1 python3.9[96189]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 06:54:15 compute-1 sudo[96187]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:15 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 31 06:54:15 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 31 06:54:15 compute-1 ceph-mon[81728]: pgmap v304: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:15 compute-1 ceph-mon[81728]: 10.14 deep-scrub starts
Jan 31 06:54:15 compute-1 ceph-mon[81728]: 10.14 deep-scrub ok
Jan 31 06:54:15 compute-1 ceph-mon[81728]: 10.1 scrub starts
Jan 31 06:54:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:15 compute-1 ceph-mon[81728]: 10.1 scrub ok
Jan 31 06:54:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:15.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:16.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:16 compute-1 sshd-session[93046]: Connection closed by 192.168.122.30 port 52826
Jan 31 06:54:16 compute-1 sshd-session[93043]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:54:16 compute-1 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 06:54:16 compute-1 systemd[1]: session-35.scope: Consumed 16.195s CPU time.
Jan 31 06:54:16 compute-1 systemd-logind[788]: Session 35 logged out. Waiting for processes to exit.
Jan 31 06:54:16 compute-1 systemd-logind[788]: Removed session 35.
Jan 31 06:54:16 compute-1 ceph-mon[81728]: 11.12 scrub starts
Jan 31 06:54:16 compute-1 ceph-mon[81728]: 11.12 scrub ok
Jan 31 06:54:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:16 compute-1 ceph-mon[81728]: 10.f scrub starts
Jan 31 06:54:16 compute-1 ceph-mon[81728]: 10.f scrub ok
Jan 31 06:54:16 compute-1 ceph-mon[81728]: pgmap v305: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:17 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Jan 31 06:54:17 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Jan 31 06:54:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:17 compute-1 ceph-mon[81728]: 11.14 deep-scrub starts
Jan 31 06:54:17 compute-1 ceph-mon[81728]: 11.14 deep-scrub ok
Jan 31 06:54:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:17.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:18.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:18 compute-1 ceph-mon[81728]: 8.5 scrub starts
Jan 31 06:54:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:18 compute-1 ceph-mon[81728]: 8.5 scrub ok
Jan 31 06:54:18 compute-1 ceph-mon[81728]: pgmap v306: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:19 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:19.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:19 compute-1 ceph-mon[81728]: 11.17 deep-scrub starts
Jan 31 06:54:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:19 compute-1 ceph-mon[81728]: 11.17 deep-scrub ok
Jan 31 06:54:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:20.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:20 compute-1 ceph-mon[81728]: pgmap v307: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:21 compute-1 sshd-session[96215]: Accepted publickey for zuul from 192.168.122.30 port 35180 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:54:21 compute-1 systemd-logind[788]: New session 36 of user zuul.
Jan 31 06:54:21 compute-1 systemd[1]: Started Session 36 of User zuul.
Jan 31 06:54:21 compute-1 sshd-session[96215]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:54:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:21.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:22.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:22 compute-1 python3.9[96368]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:54:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:22 compute-1 ceph-mon[81728]: pgmap v308: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:23 compute-1 python3.9[96522]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:54:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:23.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:54:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:24.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:54:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:24 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:24 compute-1 python3.9[96715]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:54:25 compute-1 sshd-session[96218]: Connection closed by 192.168.122.30 port 35180
Jan 31 06:54:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:25 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:25 compute-1 ceph-mon[81728]: pgmap v309: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:25 compute-1 ceph-mon[81728]: 10.13 scrub starts
Jan 31 06:54:25 compute-1 ceph-mon[81728]: 10.13 scrub ok
Jan 31 06:54:25 compute-1 sshd-session[96215]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:54:25 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 06:54:25 compute-1 systemd[1]: session-36.scope: Consumed 1.903s CPU time.
Jan 31 06:54:25 compute-1 systemd-logind[788]: Session 36 logged out. Waiting for processes to exit.
Jan 31 06:54:25 compute-1 systemd-logind[788]: Removed session 36.
Jan 31 06:54:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:25.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:26.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:27 compute-1 ceph-mon[81728]: pgmap v310: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:27 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Jan 31 06:54:27 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Jan 31 06:54:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:28.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:28 compute-1 ceph-mon[81728]: 8.16 deep-scrub starts
Jan 31 06:54:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:28 compute-1 ceph-mon[81728]: 8.16 deep-scrub ok
Jan 31 06:54:28 compute-1 ceph-mon[81728]: 8.10 deep-scrub starts
Jan 31 06:54:28 compute-1 ceph-mon[81728]: 8.10 deep-scrub ok
Jan 31 06:54:28 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 31 06:54:28 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 31 06:54:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:29 compute-1 ceph-mon[81728]: 11.1 scrub starts
Jan 31 06:54:29 compute-1 ceph-mon[81728]: pgmap v311: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:29 compute-1 ceph-mon[81728]: 11.1 scrub ok
Jan 31 06:54:29 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:29 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:29.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:30.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:30 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 31 06:54:30 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 31 06:54:30 compute-1 sshd-session[96741]: Accepted publickey for zuul from 192.168.122.30 port 41134 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:54:30 compute-1 systemd-logind[788]: New session 37 of user zuul.
Jan 31 06:54:30 compute-1 systemd[1]: Started Session 37 of User zuul.
Jan 31 06:54:30 compute-1 sshd-session[96741]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:54:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:31 compute-1 ceph-mon[81728]: 8.8 scrub starts
Jan 31 06:54:31 compute-1 ceph-mon[81728]: pgmap v312: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:31 compute-1 ceph-mon[81728]: 8.8 scrub ok
Jan 31 06:54:31 compute-1 ceph-mon[81728]: 9.19 scrub starts
Jan 31 06:54:31 compute-1 ceph-mon[81728]: 9.19 scrub ok
Jan 31 06:54:31 compute-1 python3.9[96894]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:54:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:54:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:31.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:54:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 06:54:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:32.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 06:54:32 compute-1 ceph-mon[81728]: 11.16 scrub starts
Jan 31 06:54:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:32 compute-1 ceph-mon[81728]: 11.16 scrub ok
Jan 31 06:54:32 compute-1 ceph-mon[81728]: 8.2 scrub starts
Jan 31 06:54:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:32 compute-1 ceph-mon[81728]: 8.2 scrub ok
Jan 31 06:54:32 compute-1 python3.9[97048]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:54:33 compute-1 ceph-mon[81728]: pgmap v313: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:33 compute-1 ceph-mon[81728]: 9.1a deep-scrub starts
Jan 31 06:54:33 compute-1 ceph-mon[81728]: 9.1a deep-scrub ok
Jan 31 06:54:33 compute-1 ceph-mon[81728]: 11.13 scrub starts
Jan 31 06:54:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:33 compute-1 ceph-mon[81728]: 11.13 scrub ok
Jan 31 06:54:33 compute-1 sudo[97202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftixzlawkhwvmxhblhrbpnaxfxqpoqnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842473.1651971-81-220129027888258/AnsiballZ_setup.py'
Jan 31 06:54:33 compute-1 sudo[97202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:33 compute-1 python3.9[97204]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:54:33 compute-1 sudo[97202]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:33.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:34.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:34 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:34 compute-1 sudo[97286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsmqyyyhsofqssxcrbjjwcgihnzymqnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842473.1651971-81-220129027888258/AnsiballZ_dnf.py'
Jan 31 06:54:34 compute-1 sudo[97286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:34 compute-1 ceph-mon[81728]: 8.11 deep-scrub starts
Jan 31 06:54:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:34 compute-1 ceph-mon[81728]: 8.11 deep-scrub ok
Jan 31 06:54:34 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 214 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:34 compute-1 python3.9[97288]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:54:35 compute-1 ceph-mon[81728]: pgmap v314: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:35 compute-1 sudo[97286]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:35.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:36.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:36 compute-1 ceph-mon[81728]: 9.1b scrub starts
Jan 31 06:54:36 compute-1 ceph-mon[81728]: 9.1b scrub ok
Jan 31 06:54:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:36 compute-1 sudo[97439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmzvdphrbaeasresqdrbxmnlhfvcdjdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842476.1539054-117-83739246943232/AnsiballZ_setup.py'
Jan 31 06:54:36 compute-1 sudo[97439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:36 compute-1 python3.9[97441]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:54:36 compute-1 sudo[97439]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:37 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 31 06:54:37 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 31 06:54:37 compute-1 ceph-mon[81728]: pgmap v315: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:37 compute-1 sudo[97634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuzpssoxlyjhocrjmvfbiwwfdsjzhirf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842477.2022398-150-56994736779471/AnsiballZ_file.py'
Jan 31 06:54:37 compute-1 sudo[97634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:37 compute-1 python3.9[97636]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:54:37 compute-1 sudo[97634]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:37.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:38.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:38 compute-1 sudo[97786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ganetmgvmejdkdxqmyqrwnnrqpedpppu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842477.940459-174-209567950510357/AnsiballZ_command.py'
Jan 31 06:54:38 compute-1 sudo[97786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:38 compute-1 python3.9[97788]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:54:38 compute-1 sudo[97786]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:38 compute-1 ceph-mon[81728]: 8.17 scrub starts
Jan 31 06:54:38 compute-1 ceph-mon[81728]: 8.17 scrub ok
Jan 31 06:54:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:38 compute-1 ceph-mon[81728]: pgmap v316: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:39 compute-1 sudo[97951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzfjjrhjifhescuqakufakswueflvawu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842478.8035357-198-162129760176267/AnsiballZ_stat.py'
Jan 31 06:54:39 compute-1 sudo[97951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:39 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:39 compute-1 python3.9[97953]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:54:39 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 31 06:54:39 compute-1 sudo[97951]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:39 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 31 06:54:39 compute-1 sudo[98029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wledkvjrhskklbwxjlgjorgtzvxtyprm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842478.8035357-198-162129760176267/AnsiballZ_file.py'
Jan 31 06:54:39 compute-1 sudo[98029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:39 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:39 compute-1 python3.9[98031]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:54:39 compute-1 sudo[98029]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:54:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:39.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:54:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:40.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:40 compute-1 sudo[98181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tttijdpfkpluigfvsfoyzpdkuicxqlrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842479.9777696-234-248361112683236/AnsiballZ_stat.py'
Jan 31 06:54:40 compute-1 sudo[98181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:40 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 31 06:54:40 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 31 06:54:40 compute-1 python3.9[98183]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:54:40 compute-1 sudo[98181]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:40 compute-1 ceph-mon[81728]: 9.1e deep-scrub starts
Jan 31 06:54:40 compute-1 ceph-mon[81728]: 9.1e deep-scrub ok
Jan 31 06:54:40 compute-1 ceph-mon[81728]: 11.f scrub starts
Jan 31 06:54:40 compute-1 ceph-mon[81728]: 11.f scrub ok
Jan 31 06:54:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:40 compute-1 ceph-mon[81728]: pgmap v317: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:40 compute-1 sudo[98259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikvuwvcyqkvdvrmnfqzzwaqiwsozbwzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842479.9777696-234-248361112683236/AnsiballZ_file.py'
Jan 31 06:54:40 compute-1 sudo[98259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:40 compute-1 python3.9[98261]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:54:40 compute-1 sudo[98259]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:41 compute-1 sudo[98411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tysiicvncwnfncjhwlhvhlcfwakvvweh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842481.0616043-273-25647001354309/AnsiballZ_ini_file.py'
Jan 31 06:54:41 compute-1 sudo[98411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:41 compute-1 python3.9[98413]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:54:41 compute-1 ceph-mon[81728]: 11.5 scrub starts
Jan 31 06:54:41 compute-1 ceph-mon[81728]: 11.5 scrub ok
Jan 31 06:54:41 compute-1 ceph-mon[81728]: 9.1f scrub starts
Jan 31 06:54:41 compute-1 ceph-mon[81728]: 9.1f scrub ok
Jan 31 06:54:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:41 compute-1 sudo[98411]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:54:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:41.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:54:41 compute-1 sudo[98563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-petlqivytpvubjbkwwojpfydiwcjmiim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842481.7431047-273-81131015677598/AnsiballZ_ini_file.py'
Jan 31 06:54:41 compute-1 sudo[98563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:54:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:42.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:54:42 compute-1 python3.9[98565]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:54:42 compute-1 sudo[98563]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:42 compute-1 sudo[98715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccmmollsvchxzvogzokepdxadnwmvuoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842482.3411977-273-175443552740019/AnsiballZ_ini_file.py'
Jan 31 06:54:42 compute-1 sudo[98715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:42 compute-1 ceph-mon[81728]: pgmap v318: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:42 compute-1 python3.9[98717]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:54:42 compute-1 sudo[98715]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:43 compute-1 sudo[98867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-datctimpugjzqswkggbjxznddjczsogk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842482.9088762-273-51360766018284/AnsiballZ_ini_file.py'
Jan 31 06:54:43 compute-1 sudo[98867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:43 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 31 06:54:43 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 31 06:54:43 compute-1 python3.9[98869]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:54:43 compute-1 sudo[98867]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:43 compute-1 sudo[99019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puhoienyarmfspvrtdgejvpnezsqfuad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842483.6098611-366-242260863397658/AnsiballZ_dnf.py'
Jan 31 06:54:43 compute-1 sudo[99019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:43.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:44 compute-1 python3.9[99021]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:54:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:54:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:44.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:54:44 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:44 compute-1 ceph-mon[81728]: 8.4 scrub starts
Jan 31 06:54:44 compute-1 ceph-mon[81728]: 8.4 scrub ok
Jan 31 06:54:44 compute-1 ceph-mon[81728]: 8.15 scrub starts
Jan 31 06:54:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:44 compute-1 ceph-mon[81728]: 8.15 scrub ok
Jan 31 06:54:44 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:44 compute-1 ceph-mon[81728]: pgmap v319: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:45 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Jan 31 06:54:45 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Jan 31 06:54:45 compute-1 sudo[99019]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:45.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:54:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:46.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:54:46 compute-1 sudo[99172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgdvkkjorartoodekihuyamwqtkteoxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842485.9288456-399-148567449932823/AnsiballZ_setup.py'
Jan 31 06:54:46 compute-1 sudo[99172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:46 compute-1 python3.9[99174]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:54:46 compute-1 sudo[99172]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:46 compute-1 ceph-mon[81728]: 11.4 deep-scrub starts
Jan 31 06:54:46 compute-1 ceph-mon[81728]: 11.4 deep-scrub ok
Jan 31 06:54:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:46 compute-1 ceph-mon[81728]: pgmap v320: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:46 compute-1 sudo[99326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqdtusevzaaeerflablutgdtkahprbwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842486.6786375-423-234307579838996/AnsiballZ_stat.py'
Jan 31 06:54:46 compute-1 sudo[99326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:47 compute-1 python3.9[99328]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:54:47 compute-1 sudo[99326]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:47 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Jan 31 06:54:47 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Jan 31 06:54:47 compute-1 sudo[99478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nknvdsongisrihrrwdictnuumhqxajhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842487.3766694-450-53018653717903/AnsiballZ_stat.py'
Jan 31 06:54:47 compute-1 sudo[99478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:47 compute-1 ceph-mon[81728]: 11.7 deep-scrub starts
Jan 31 06:54:47 compute-1 ceph-mon[81728]: 11.7 deep-scrub ok
Jan 31 06:54:47 compute-1 python3.9[99480]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:54:47 compute-1 sudo[99478]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:54:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:47.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:54:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:48.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:48 compute-1 sudo[99630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flzqlnjsymhvpyfhbjubxveezruznkaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842488.1337276-480-176232291876644/AnsiballZ_command.py'
Jan 31 06:54:48 compute-1 sudo[99630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:48 compute-1 python3.9[99632]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:54:48 compute-1 sudo[99630]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:48 compute-1 ceph-mon[81728]: pgmap v321: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:49 compute-1 sudo[99783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrqcmopcasetjcwanzidiqhndqebdeqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842488.8815744-510-17728061955950/AnsiballZ_service_facts.py'
Jan 31 06:54:49 compute-1 sudo[99783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:49 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 31 06:54:49 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 31 06:54:49 compute-1 python3.9[99785]: ansible-service_facts Invoked
Jan 31 06:54:49 compute-1 network[99802]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:54:49 compute-1 network[99803]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:54:49 compute-1 network[99804]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:54:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:49 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:49 compute-1 ceph-mon[81728]: 8.1b scrub starts
Jan 31 06:54:49 compute-1 ceph-mon[81728]: 8.1b scrub ok
Jan 31 06:54:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:49.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:50.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:50 compute-1 ceph-mon[81728]: 8.3 scrub starts
Jan 31 06:54:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:50 compute-1 ceph-mon[81728]: 8.3 scrub ok
Jan 31 06:54:50 compute-1 ceph-mon[81728]: pgmap v322: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:51 compute-1 ceph-mon[81728]: 11.a scrub starts
Jan 31 06:54:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:51 compute-1 ceph-mon[81728]: 11.a scrub ok
Jan 31 06:54:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:54:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:51.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:54:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:52.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Jan 31 06:54:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Jan 31 06:54:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:52 compute-1 ceph-mon[81728]: pgmap v323: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:52 compute-1 ceph-mon[81728]: 11.1a deep-scrub starts
Jan 31 06:54:52 compute-1 ceph-mon[81728]: 11.1a deep-scrub ok
Jan 31 06:54:53 compute-1 sudo[99783]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:54:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:53.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:54:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:54.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:54 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:54 compute-1 sudo[100087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpdkctlntcsfxethfbdtgwyrsjgilanm ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769842494.442451-555-46865889779201/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769842494.442451-555-46865889779201/args'
Jan 31 06:54:54 compute-1 sudo[100087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:54 compute-1 sudo[100087]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:54 compute-1 ceph-mon[81728]: 8.9 scrub starts
Jan 31 06:54:54 compute-1 ceph-mon[81728]: 8.9 scrub ok
Jan 31 06:54:54 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 234 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:54 compute-1 ceph-mon[81728]: pgmap v324: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:55 compute-1 sudo[100254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqtnrsxxfowzujeehbvyganwzzcxrfxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842495.107455-588-212033648513260/AnsiballZ_dnf.py'
Jan 31 06:54:55 compute-1 sudo[100254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:55 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 31 06:54:55 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 31 06:54:55 compute-1 python3.9[100256]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:54:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:55 compute-1 ceph-mon[81728]: 11.1b scrub starts
Jan 31 06:54:55 compute-1 ceph-mon[81728]: 11.1b scrub ok
Jan 31 06:54:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:55.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:56.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:56 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Jan 31 06:54:56 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Jan 31 06:54:56 compute-1 sudo[100254]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:56 compute-1 ceph-mon[81728]: pgmap v325: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:56 compute-1 ceph-mon[81728]: 8.18 deep-scrub starts
Jan 31 06:54:56 compute-1 ceph-mon[81728]: 8.18 deep-scrub ok
Jan 31 06:54:57 compute-1 ceph-mon[81728]: 8.c scrub starts
Jan 31 06:54:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:57 compute-1 ceph-mon[81728]: 8.c scrub ok
Jan 31 06:54:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:57.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:54:58.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:58 compute-1 sudo[100407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fguvnbfmbtetfeupmchbswtqanepdhwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842497.5042455-627-101698717153002/AnsiballZ_package_facts.py'
Jan 31 06:54:58 compute-1 sudo[100407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:58 compute-1 python3.9[100409]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 06:54:58 compute-1 sudo[100407]: pam_unix(sudo:session): session closed for user root
Jan 31 06:54:58 compute-1 ceph-mon[81728]: 8.f scrub starts
Jan 31 06:54:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:58 compute-1 ceph-mon[81728]: 8.f scrub ok
Jan 31 06:54:58 compute-1 ceph-mon[81728]: pgmap v326: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:54:59 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:54:59 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 31 06:54:59 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 31 06:54:59 compute-1 sudo[100559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drlverqdhopkxczkyupmrjadxmedimgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842499.417289-658-277713879515573/AnsiballZ_stat.py'
Jan 31 06:54:59 compute-1 sudo[100559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:54:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:54:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:54:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:54:59.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:54:59 compute-1 ceph-mon[81728]: 8.a scrub starts
Jan 31 06:54:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:54:59 compute-1 ceph-mon[81728]: 8.a scrub ok
Jan 31 06:54:59 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:54:59 compute-1 ceph-mon[81728]: 11.1d scrub starts
Jan 31 06:54:59 compute-1 ceph-mon[81728]: 11.1d scrub ok
Jan 31 06:55:00 compute-1 python3.9[100561]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:00 compute-1 sudo[100559]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:55:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:00.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:55:00 compute-1 sudo[100637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxapgsqhmvcerprxpicocghjqadieubw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842499.417289-658-277713879515573/AnsiballZ_file.py'
Jan 31 06:55:00 compute-1 sudo[100637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:00 compute-1 python3.9[100639]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:00 compute-1 sudo[100637]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:00 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 31 06:55:00 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 31 06:55:00 compute-1 sudo[100789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vokdzncomnheqelhoxooasbnmhkuczyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842500.6163397-694-162121092449816/AnsiballZ_stat.py'
Jan 31 06:55:00 compute-1 sudo[100789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:00 compute-1 ceph-mon[81728]: 8.d scrub starts
Jan 31 06:55:00 compute-1 ceph-mon[81728]: 8.d scrub ok
Jan 31 06:55:00 compute-1 ceph-mon[81728]: pgmap v327: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:00 compute-1 ceph-mon[81728]: 8.19 scrub starts
Jan 31 06:55:00 compute-1 ceph-mon[81728]: 8.19 scrub ok
Jan 31 06:55:01 compute-1 python3.9[100791]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:01 compute-1 sudo[100789]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:01 compute-1 sudo[100867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xczrbluowlivwcijwifzxzijqvlhktzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842500.6163397-694-162121092449816/AnsiballZ_file.py'
Jan 31 06:55:01 compute-1 sudo[100867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:01 compute-1 python3.9[100869]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:01 compute-1 sudo[100867]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:01 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 31 06:55:01 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 31 06:55:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:01.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:02 compute-1 ceph-mon[81728]: 11.3 scrub starts
Jan 31 06:55:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:02 compute-1 ceph-mon[81728]: 11.3 scrub ok
Jan 31 06:55:02 compute-1 ceph-mon[81728]: 11.1c scrub starts
Jan 31 06:55:02 compute-1 ceph-mon[81728]: 11.1c scrub ok
Jan 31 06:55:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:02.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:02 compute-1 sudo[101019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sueevzvzglnyrtnacbkmvjyusayoabgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842502.6076741-748-90672419108773/AnsiballZ_lineinfile.py'
Jan 31 06:55:02 compute-1 sudo[101019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:03 compute-1 ceph-mon[81728]: pgmap v328: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:03 compute-1 python3.9[101021]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:03 compute-1 sudo[101019]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:03 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 31 06:55:03 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 31 06:55:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:55:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:03.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:55:04 compute-1 ceph-mon[81728]: 8.b scrub starts
Jan 31 06:55:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:04 compute-1 ceph-mon[81728]: 8.b scrub ok
Jan 31 06:55:04 compute-1 ceph-mon[81728]: 11.1e scrub starts
Jan 31 06:55:04 compute-1 ceph-mon[81728]: 11.1e scrub ok
Jan 31 06:55:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:55:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:04.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:55:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:04 compute-1 sudo[101171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgbnbxwhapgrxmgapjeadnfpohyjxyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842504.2029765-792-63643639489434/AnsiballZ_setup.py'
Jan 31 06:55:04 compute-1 sudo[101171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:04 compute-1 python3.9[101173]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:55:04 compute-1 sudo[101171]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:05 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:55:05 compute-1 ceph-mon[81728]: pgmap v329: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:05 compute-1 sudo[101255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emvvoovhukhgvytcshaaifbvtuqqnbne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842504.2029765-792-63643639489434/AnsiballZ_systemd.py'
Jan 31 06:55:05 compute-1 sudo[101255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:05 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Jan 31 06:55:05 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Jan 31 06:55:05 compute-1 python3.9[101257]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:55:05 compute-1 sudo[101255]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:55:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:05.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:55:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:06.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:06 compute-1 ceph-mon[81728]: 8.12 deep-scrub starts
Jan 31 06:55:06 compute-1 ceph-mon[81728]: 8.12 deep-scrub ok
Jan 31 06:55:06 compute-1 sshd-session[96744]: Connection closed by 192.168.122.30 port 41134
Jan 31 06:55:06 compute-1 sshd-session[96741]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:55:06 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 06:55:06 compute-1 systemd[1]: session-37.scope: Consumed 20.243s CPU time.
Jan 31 06:55:06 compute-1 systemd-logind[788]: Session 37 logged out. Waiting for processes to exit.
Jan 31 06:55:06 compute-1 systemd-logind[788]: Removed session 37.
Jan 31 06:55:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:07 compute-1 ceph-mon[81728]: pgmap v330: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:07 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 31 06:55:07 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 31 06:55:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:07.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:08.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:08 compute-1 ceph-mon[81728]: 9.6 scrub starts
Jan 31 06:55:08 compute-1 ceph-mon[81728]: 9.6 scrub ok
Jan 31 06:55:08 compute-1 ceph-mon[81728]: 11.8 scrub starts
Jan 31 06:55:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:08 compute-1 ceph-mon[81728]: 11.8 scrub ok
Jan 31 06:55:08 compute-1 ceph-mon[81728]: pgmap v331: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:09 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:09 compute-1 ceph-mon[81728]: 11.e scrub starts
Jan 31 06:55:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:09 compute-1 ceph-mon[81728]: 11.e scrub ok
Jan 31 06:55:09 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:55:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:09.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:10.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:10 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 31 06:55:10 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 31 06:55:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:10 compute-1 ceph-mon[81728]: pgmap v332: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:10 compute-1 ceph-mon[81728]: 9.e scrub starts
Jan 31 06:55:10 compute-1 ceph-mon[81728]: 9.e scrub ok
Jan 31 06:55:11 compute-1 sudo[101285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:55:11 compute-1 sudo[101285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:55:11 compute-1 sudo[101285]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:11 compute-1 sudo[101310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:55:11 compute-1 sudo[101310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:55:11 compute-1 sudo[101310]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:11 compute-1 sudo[101335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:55:11 compute-1 sudo[101335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:55:11 compute-1 sudo[101335]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:11 compute-1 sudo[101360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:55:11 compute-1 sudo[101360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:55:11 compute-1 sudo[101360]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:11 compute-1 sshd-session[101416]: Accepted publickey for zuul from 192.168.122.30 port 48346 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:55:11 compute-1 systemd-logind[788]: New session 38 of user zuul.
Jan 31 06:55:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:11.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:11 compute-1 systemd[1]: Started Session 38 of User zuul.
Jan 31 06:55:11 compute-1 sshd-session[101416]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:55:12 compute-1 ceph-mon[81728]: 11.19 scrub starts
Jan 31 06:55:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:12 compute-1 ceph-mon[81728]: 11.19 scrub ok
Jan 31 06:55:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:55:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:12.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:55:12 compute-1 sudo[101569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppxozbmkuezeunwvkcrsngcadwtvswjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842512.055072-27-176799191827255/AnsiballZ_file.py'
Jan 31 06:55:12 compute-1 sudo[101569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:12 compute-1 python3.9[101571]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:12 compute-1 sudo[101569]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:13 compute-1 ceph-mon[81728]: pgmap v333: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:13 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:55:13 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:55:13 compute-1 sudo[101723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcqawxxopntbgyxqpnproyojsivvkwjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842512.9629345-63-124855578494098/AnsiballZ_stat.py'
Jan 31 06:55:13 compute-1 sudo[101723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:13 compute-1 python3.9[101725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:13 compute-1 sudo[101723]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:13 compute-1 sudo[101801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbvkgssqegohqpeogdwtoetdmemqbyfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842512.9629345-63-124855578494098/AnsiballZ_file.py'
Jan 31 06:55:13 compute-1 sudo[101801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:13 compute-1 sshd-session[101695]: Invalid user solv from 2.57.122.238 port 48162
Jan 31 06:55:13 compute-1 sshd-session[101695]: Connection closed by invalid user solv 2.57.122.238 port 48162 [preauth]
Jan 31 06:55:13 compute-1 python3.9[101803]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:13 compute-1 sudo[101801]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:13.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:55:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:55:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:55:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:55:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:55:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:55:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:14.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:14 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:14 compute-1 sshd-session[101419]: Connection closed by 192.168.122.30 port 48346
Jan 31 06:55:14 compute-1 sshd-session[101416]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:55:14 compute-1 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 06:55:14 compute-1 systemd[1]: session-38.scope: Consumed 1.244s CPU time.
Jan 31 06:55:14 compute-1 systemd-logind[788]: Session 38 logged out. Waiting for processes to exit.
Jan 31 06:55:14 compute-1 systemd-logind[788]: Removed session 38.
Jan 31 06:55:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:15 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:55:15 compute-1 ceph-mon[81728]: pgmap v334: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:15.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:16.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:17 compute-1 ceph-mon[81728]: pgmap v335: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:17.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:18.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:19 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:19 compute-1 ceph-mon[81728]: pgmap v336: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:55:19 compute-1 sshd-session[101829]: Accepted publickey for zuul from 192.168.122.30 port 48732 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:55:19 compute-1 systemd-logind[788]: New session 39 of user zuul.
Jan 31 06:55:19 compute-1 systemd[1]: Started Session 39 of User zuul.
Jan 31 06:55:19 compute-1 sshd-session[101829]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:55:19 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 31 06:55:19 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 31 06:55:19 compute-1 sudo[101885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:55:19 compute-1 sudo[101885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:55:19 compute-1 sudo[101885]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:19 compute-1 sudo[101910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:55:19 compute-1 sudo[101910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:55:19 compute-1 sudo[101910]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:19.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:20.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:20 compute-1 python3.9[102032]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:55:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:20 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:55:20 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:55:20 compute-1 ceph-mon[81728]: 9.a scrub starts
Jan 31 06:55:20 compute-1 ceph-mon[81728]: 9.a scrub ok
Jan 31 06:55:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:20 compute-1 ceph-mon[81728]: 8.1f deep-scrub starts
Jan 31 06:55:20 compute-1 ceph-mon[81728]: 8.1f deep-scrub ok
Jan 31 06:55:20 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Jan 31 06:55:20 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Jan 31 06:55:21 compute-1 sudo[102186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxfbxagypqucvumemipzeehujqjgpzfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842520.6979394-60-105109308655478/AnsiballZ_file.py'
Jan 31 06:55:21 compute-1 sudo[102186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:21 compute-1 ceph-mon[81728]: pgmap v337: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:21 compute-1 ceph-mon[81728]: 9.d deep-scrub starts
Jan 31 06:55:21 compute-1 ceph-mon[81728]: 9.d deep-scrub ok
Jan 31 06:55:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:21 compute-1 python3.9[102188]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:21 compute-1 sudo[102186]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:21.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:21 compute-1 sudo[102361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqfudtefxymsfiefnieoxxvifdvbfhlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842521.6255736-84-121528644630449/AnsiballZ_stat.py'
Jan 31 06:55:21 compute-1 sudo[102361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:22 compute-1 python3.9[102363]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:22 compute-1 sudo[102361]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:22 compute-1 sudo[102439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzzbzvfgsldrdtxdpvhsurkzdsidltap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842521.6255736-84-121528644630449/AnsiballZ_file.py'
Jan 31 06:55:22 compute-1 sudo[102439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:22 compute-1 python3.9[102441]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.2o4b5euk recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:22 compute-1 sudo[102439]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:22 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 31 06:55:22 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 31 06:55:23 compute-1 sudo[102591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyslbvznqerjbwccolzswmwhyzpmdagy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842523.0902731-144-80075878617763/AnsiballZ_stat.py'
Jan 31 06:55:23 compute-1 sudo[102591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:23 compute-1 ceph-mon[81728]: pgmap v338: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:23 compute-1 ceph-mon[81728]: 9.f scrub starts
Jan 31 06:55:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:23 compute-1 python3.9[102593]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:23 compute-1 sudo[102591]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:23 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 31 06:55:23 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 31 06:55:23 compute-1 sudo[102669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvhfqzdlfiptllaflwqdnqqvvujbvfcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842523.0902731-144-80075878617763/AnsiballZ_file.py'
Jan 31 06:55:23 compute-1 sudo[102669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:23.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:23 compute-1 python3.9[102671]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.38095_ls recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:24 compute-1 sudo[102669]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:24.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:24 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:24 compute-1 sudo[102821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czecfaiehlptqpiximpkqbjsttntrntr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842524.361426-183-260729584081417/AnsiballZ_file.py'
Jan 31 06:55:24 compute-1 sudo[102821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:24 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Jan 31 06:55:24 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Jan 31 06:55:24 compute-1 python3.9[102823]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:55:24 compute-1 sudo[102821]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:24 compute-1 ceph-mon[81728]: 9.f scrub ok
Jan 31 06:55:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:24 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:55:24 compute-1 ceph-mon[81728]: pgmap v339: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:25 compute-1 sudo[102973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etgskaznmqjwpcjknylxcxjpkijsyrsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842524.963847-207-273338761839118/AnsiballZ_stat.py'
Jan 31 06:55:25 compute-1 sudo[102973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:25 compute-1 python3.9[102975]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:25 compute-1 sudo[102973]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:25 compute-1 sudo[103051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brctzvexghaoggvculmuqpfhlcujxwxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842524.963847-207-273338761839118/AnsiballZ_file.py'
Jan 31 06:55:25 compute-1 sudo[103051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:25 compute-1 python3.9[103053]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:55:25 compute-1 sudo[103051]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:25 compute-1 ceph-mon[81728]: 9.10 scrub starts
Jan 31 06:55:25 compute-1 ceph-mon[81728]: 9.10 scrub ok
Jan 31 06:55:25 compute-1 ceph-mon[81728]: 9.11 deep-scrub starts
Jan 31 06:55:25 compute-1 ceph-mon[81728]: 9.11 deep-scrub ok
Jan 31 06:55:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:25.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:55:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:26.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
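The anonymous "HEAD / HTTP/1.0" entries that beast logs every two seconds from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer health probes against radosgw. A probe of the same shape, sketched with http.client; the target host and port are assumptions, since the access log does not record the listening socket:

    #!/usr/bin/env python3.9
    # Reproduces the kind of anonymous HEAD / probe that shows up in the
    # radosgw beast access log above. Host and port are assumptions; the
    # log lines do not record which socket beast is bound to.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.101", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # radosgw answers 200 with an empty body
    conn.close()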
Jan 31 06:55:26 compute-1 sudo[103203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-halyzqbprhmqazplmmqwyotjsjdcocyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842525.979517-207-138833862267151/AnsiballZ_stat.py'
Jan 31 06:55:26 compute-1 sudo[103203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:26 compute-1 python3.9[103205]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:26 compute-1 sudo[103203]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:26 compute-1 sudo[103281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfendlvvndryoomandwfivdcnysfinpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842525.979517-207-138833862267151/AnsiballZ_file.py'
Jan 31 06:55:26 compute-1 sudo[103281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:26 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 31 06:55:26 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 31 06:55:26 compute-1 python3.9[103283]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:55:26 compute-1 sudo[103281]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:27 compute-1 sudo[103433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etlprfzggcylnvwiuyhiulzddiiwglza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842527.2074714-276-212767654827466/AnsiballZ_file.py'
Jan 31 06:55:27 compute-1 sudo[103433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:55:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:27.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:55:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:55:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:28.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:55:29 compute-1 python3.9[103435]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:29 compute-1 sudo[103433]: pam_unix(sudo:session): session closed for user root
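The mode=420 in the file task just above is not an error: Ansible received the octal mode as a bare integer, and 420 decimal is exactly 0o644. A quick confirmation:

    # 420 decimal is 0o644 octal, i.e. rw-r--r--:
    print(oct(420))          # -> 0o644
    print(int("644", 8))     # -> 420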
Jan 31 06:55:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:29 compute-1 ceph-mon[81728]: pgmap v340: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:29 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:29 compute-1 sudo[103585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ishzkdtalgwfgvvmuziixyxhwtjxjhaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842529.306306-300-81067866476984/AnsiballZ_stat.py'
Jan 31 06:55:29 compute-1 sudo[103585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:29 compute-1 python3.9[103587]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:29 compute-1 sudo[103585]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:29.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:30 compute-1 sudo[103663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctyliqvwbfwreotelttkhoupqfnncojh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842529.306306-300-81067866476984/AnsiballZ_file.py'
Jan 31 06:55:30 compute-1 sudo[103663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:30.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:30 compute-1 ceph-mon[81728]: 9.12 scrub starts
Jan 31 06:55:30 compute-1 ceph-mon[81728]: 9.12 scrub ok
Jan 31 06:55:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:30 compute-1 ceph-mon[81728]: pgmap v341: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:30 compute-1 ceph-mon[81728]: 8.1c scrub starts
Jan 31 06:55:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:30 compute-1 ceph-mon[81728]: 8.1c scrub ok
Jan 31 06:55:30 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 268 sec, osd.2 has slow ops (SLOW_OPS)
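The repeating SLOW_OPS updates (the oldest op on osd.2 blocked for 263, then 268 seconds) can be polled directly instead of scraped out of the mon log. A sketch against the ceph CLI, assuming an admin keyring is available on the node:

    #!/usr/bin/env python3.9
    # Poll cluster health the way the SLOW_OPS lines above surface it.
    # Assumes the ceph CLI and an admin keyring are available locally.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(raw)

    print(health["status"])                      # e.g. HEALTH_WARN
    for name, check in health.get("checks", {}).items():
        print(name, "-", check["summary"]["message"])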
Jan 31 06:55:30 compute-1 python3.9[103665]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:30 compute-1 sudo[103663]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:30 compute-1 sudo[103815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzkneyqlylgnzjxnzychnolsbdqwwiua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842530.489186-336-271612429096211/AnsiballZ_stat.py'
Jan 31 06:55:30 compute-1 sudo[103815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:30 compute-1 python3.9[103817]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:30 compute-1 sudo[103815]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:31 compute-1 sudo[103893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riczhlufupzvcshlcdgmaeottwtsudun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842530.489186-336-271612429096211/AnsiballZ_file.py'
Jan 31 06:55:31 compute-1 sudo[103893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:31 compute-1 python3.9[103895]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:31 compute-1 sudo[103893]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:31 compute-1 ceph-mon[81728]: pgmap v342: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:55:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:31.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:55:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:32.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:32 compute-1 sudo[104045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbirgxnvkoqueoqimorurikzvoksvlqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842531.5469885-372-63745244037343/AnsiballZ_systemd.py'
Jan 31 06:55:32 compute-1 sudo[104045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:32 compute-1 python3.9[104047]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:55:32 compute-1 systemd[1]: Reloading.
Jan 31 06:55:32 compute-1 systemd-sysv-generator[104078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:55:32 compute-1 systemd-rc-local-generator[104073]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:55:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:32 compute-1 ceph-mon[81728]: pgmap v343: 321 pgs: 1 active+clean+laggy, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:32 compute-1 sudo[104045]: pam_unix(sudo:session): session closed for user root
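The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) reduces to three systemctl operations; the "Reloading." line is the daemon-reload taking effect. A subprocess sketch of the equivalent:

    #!/usr/bin/env python3.9
    # Equivalent of the ansible.builtin.systemd invocation logged above:
    # daemon_reload=True, enabled=True, name=edpm-container-shutdown,
    # state=started, scope=system.
    import subprocess

    unit = "edpm-container-shutdown.service"
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", unit], check=True)
    subprocess.run(["systemctl", "start", unit], check=True)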
Jan 31 06:55:32 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Jan 31 06:55:32 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Jan 31 06:55:33 compute-1 sudo[104235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txdhtlnjlnkbcneeqjnircjaotnqfhyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842533.0151954-396-254021230991634/AnsiballZ_stat.py'
Jan 31 06:55:33 compute-1 sudo[104235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:33 compute-1 python3.9[104237]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:33 compute-1 sudo[104235]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:33 compute-1 sudo[104313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qasnlhwgaogqnnollewiikmutfvvzcwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842533.0151954-396-254021230991634/AnsiballZ_file.py'
Jan 31 06:55:33 compute-1 sudo[104313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:33 compute-1 python3.9[104315]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:33 compute-1 sudo[104313]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:55:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:33.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:55:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:34.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:34 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:34 compute-1 sudo[104465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkjcmqacnadflgmustghzrhwfqkrkzts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842534.4901366-432-60294181193428/AnsiballZ_stat.py'
Jan 31 06:55:34 compute-1 sudo[104465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:34 compute-1 python3.9[104467]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:35 compute-1 sudo[104465]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:35 compute-1 sudo[104543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omivjiffnsjlwzgihoagibohnbogubgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842534.4901366-432-60294181193428/AnsiballZ_file.py'
Jan 31 06:55:35 compute-1 sudo[104543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:35 compute-1 python3.9[104545]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:35 compute-1 sudo[104543]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:35 compute-1 sudo[104695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qangyqywlkfppiialownolshwumtdzso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842535.6800914-468-30583382292488/AnsiballZ_systemd.py'
Jan 31 06:55:35 compute-1 sudo[104695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:35.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:36.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:36 compute-1 python3.9[104697]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:55:36 compute-1 systemd[1]: Reloading.
Jan 31 06:55:36 compute-1 systemd-sysv-generator[104724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:55:36 compute-1 systemd-rc-local-generator[104720]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:55:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:37.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:38.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:38 compute-1 ceph-mds[84120]: mds.beacon.cephfs.compute-1.hhzmle missed beacon ack from the monitors
Jan 31 06:55:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 06:55:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:40.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 06:55:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:55:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:40.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:55:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:42.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:42.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:42 compute-1 ceph-mds[84120]: mds.beacon.cephfs.compute-1.hhzmle missed beacon ack from the monitors
Jan 31 06:55:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).paxos(paxos updating c 252..990) lease_timeout -- calling new election
Jan 31 06:55:43 compute-1 ceph-mon[81728]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 06:55:43 compute-1 ceph-mon[81728]: paxos.2).electionLogic(14) init, last seen epoch 14
Jan 31 06:55:43 compute-1 ceph-mon[81728]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 06:55:43 compute-1 ceph-mon[81728]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 06:55:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:44.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:44 compute-1 systemd[1]: Starting Create netns directory...
Jan 31 06:55:44 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 06:55:44 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 06:55:44 compute-1 systemd[1]: Finished Create netns directory.
Jan 31 06:55:44 compute-1 sudo[104695]: pam_unix(sudo:session): session closed for user root
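netns-placeholder runs to completion as a oneshot: the service and the transient run-netns-placeholder.mount both deactivate once /run/netns exists as a mount point. The unit file contents are not shown in this log; one plausible implementation, sketched with the ip tool (the namespace name "placeholder" is a guess), is to create and immediately delete a namespace, which leaves /run/netns mounted behind:

    #!/usr/bin/env python3.9
    # Guesswork sketch of what a "Create netns directory" oneshot might do;
    # the unit file is not in this log. Creating and deleting one namespace
    # is enough to make the kernel set up the /run/netns mount. Needs root.
    import subprocess

    subprocess.run(["ip", "netns", "add", "placeholder"], check=True)
    subprocess.run(["ip", "netns", "delete", "placeholder"], check=True)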
Jan 31 06:55:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:44.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:45 compute-1 python3.9[104890]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:55:45 compute-1 network[104907]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:55:45 compute-1 network[104908]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:55:45 compute-1 network[104909]: It is advised to switch to 'NetworkManager' instead for network management.
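The three network[...] warnings come from the legacy network-scripts service that service_facts just triggered; they pair with the systemd-sysv-generator complaint seen on every daemon reload above. The recommended migration target is NetworkManager; a quick way to see which devices it already manages, assuming nmcli is installed:

    #!/usr/bin/env python3.9
    # Quick check for the migration the deprecation warning recommends:
    # list devices and whether NetworkManager currently manages them.
    # Assumes the nmcli binary is present.
    import subprocess

    out = subprocess.run(
        ["nmcli", "-t", "-f", "DEVICE,STATE", "device", "status"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        device, state = line.split(":", 1)
        print(f"{device}: {state}")  # 'unmanaged' means legacy scripts own it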
Jan 31 06:55:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:55:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:46.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:55:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:55:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:46.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:55:46 compute-1 ceph-mds[84120]: mds.beacon.cephfs.compute-1.hhzmle missed beacon ack from the monitors
Jan 31 06:55:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:48.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:48.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:48 compute-1 sudo[105169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bligaydhzcxhwipfasnmiisktmaqatmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842547.9317603-546-33849224284406/AnsiballZ_stat.py'
Jan 31 06:55:48 compute-1 sudo[105169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:48 compute-1 python3.9[105171]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:48 compute-1 sudo[105169]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:48 compute-1 sudo[105247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggdxnwpeckfqjszmgqmftnbemndtrvib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842547.9317603-546-33849224284406/AnsiballZ_file.py'
Jan 31 06:55:48 compute-1 sudo[105247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:48 compute-1 python3.9[105249]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:48 compute-1 sudo[105247]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:49 compute-1 ceph-mon[81728]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Jan 31 06:55:49 compute-1 ceph-mon[81728]: paxos.2).electionLogic(18) init, last seen epoch 18
Jan 31 06:55:49 compute-1 ceph-mon[81728]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 06:55:49 compute-1 ceph-mon[81728]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 06:55:49 compute-1 sudo[105399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjqgdiwlvcnybyscrsjsogdtaqnpvlmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842549.143247-585-262365426997644/AnsiballZ_file.py'
Jan 31 06:55:49 compute-1 sudo[105399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:49 compute-1 ceph-mon[81728]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 06:55:49 compute-1 ceph-mon[81728]: 9.15 deep-scrub starts
Jan 31 06:55:49 compute-1 ceph-mon[81728]: 9.15 deep-scrub ok
Jan 31 06:55:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:49 compute-1 ceph-mon[81728]: 8.6 scrub starts
Jan 31 06:55:49 compute-1 ceph-mon[81728]: 8.6 scrub ok
Jan 31 06:55:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:49 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:55:49 compute-1 ceph-mon[81728]: pgmap v344: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:49 compute-1 python3.9[105401]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 06:55:49 compute-1 sudo[105399]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:50.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:50 compute-1 sudo[105551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzwmggsqauvpbkbxwjnbuwqxvjeoyczn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842549.8258896-609-152808972902057/AnsiballZ_stat.py'
Jan 31 06:55:50 compute-1 sudo[105551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:55:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:50.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:55:50 compute-1 python3.9[105553]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:50 compute-1 sudo[105551]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:50 compute-1 sudo[105629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npydnbxdyrhlkarcnkkkkvzwulletejw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842549.8258896-609-152808972902057/AnsiballZ_file.py'
Jan 31 06:55:50 compute-1 sudo[105629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 9.13 scrub starts
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 9.13 scrub ok
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 9.b scrub starts
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 9.b scrub ok
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 9.5 scrub starts
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: mon.compute-1 calling monitor election
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: mon.compute-2 calling monitor election
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 9.5 scrub ok
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 9.17 scrub starts
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 9.17 scrub ok
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: mon.compute-1 calling monitor election
Jan 31 06:55:50 compute-1 ceph-mon[81728]: mon.compute-2 calling monitor election
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 06:55:50 compute-1 ceph-mon[81728]: fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 06:55:50 compute-1 ceph-mon[81728]: osdmap e118: 3 total, 3 up, 3 in
Jan 31 06:55:50 compute-1 ceph-mon[81728]: mgrmap e11: compute-0.gghdjs(active, since 8m), standbys: compute-2.iujpur, compute-1.hglnzn
Jan 31 06:55:50 compute-1 ceph-mon[81728]: Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 06:55:50 compute-1 ceph-mon[81728]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 06:55:50 compute-1 ceph-mon[81728]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 06:55:50 compute-1 ceph-mon[81728]: mon.compute-0 calling monitor election
Jan 31 06:55:50 compute-1 ceph-mon[81728]: mon.compute-0 calling monitor election
Jan 31 06:55:50 compute-1 ceph-mon[81728]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 06:55:50 compute-1 ceph-mon[81728]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 06:55:50 compute-1 ceph-mon[81728]: fsmap cephfs:1 {0=cephfs.compute-2.wcykmw=up:active} 2 up:standby
Jan 31 06:55:50 compute-1 ceph-mon[81728]: osdmap e118: 3 total, 3 up, 3 in
Jan 31 06:55:50 compute-1 ceph-mon[81728]: mgrmap e11: compute-0.gghdjs(active, since 8m), standbys: compute-2.iujpur, compute-1.hglnzn
Jan 31 06:55:50 compute-1 ceph-mon[81728]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 06:55:50 compute-1 ceph-mon[81728]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 06:55:50 compute-1 ceph-mon[81728]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops
Jan 31 06:55:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:50 compute-1 ceph-mon[81728]: pgmap v352: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
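Between 06:55:43 and 06:55:50 the peon's paxos lease times out, two rounds of elections run, MON_DOWN is raised and cleared, and mon.compute-0 ends up leader with all three monitors in quorum. The outcome the monmap line reports can be confirmed programmatically; ceph quorum_status already emits JSON (again assuming CLI plus keyring on the node):

    #!/usr/bin/env python3.9
    # Confirm the outcome of the election sequence logged above.
    # ceph quorum_status emits JSON by default; no --format flag needed.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "quorum_status"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(raw)

    print("leader:", status["quorum_leader_name"])   # compute-0 after 06:55:50
    print("quorum:", ", ".join(status["quorum_names"]))
    print("election epoch:", status["election_epoch"])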
Jan 31 06:55:50 compute-1 python3.9[105631]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:50 compute-1 sudo[105629]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:51 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:55:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:51 compute-1 sudo[105781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enkltlrtplpbluaodnjlayexhabciecb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842551.3443043-654-252027609137623/AnsiballZ_timezone.py'
Jan 31 06:55:51 compute-1 sudo[105781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:51 compute-1 python3.9[105783]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 06:55:51 compute-1 systemd[1]: Starting Time & Date Service...
Jan 31 06:55:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:52.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:52 compute-1 systemd[1]: Started Time & Date Service.
Jan 31 06:55:52 compute-1 sudo[105781]: pam_unix(sudo:session): session closed for user root
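community.general.timezone with name=UTC is what wakes systemd-timedated (the "Starting Time & Date Service..." line); timedatectl is the CLI front end to the same service, so the task reduces to:

    #!/usr/bin/env python3.9
    # Equivalent of the community.general.timezone task logged above.
    # timedatectl talks to the same systemd-timedated service the journal
    # shows starting on demand.
    import subprocess

    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
    print(subprocess.run(
        ["timedatectl", "show", "--property=Timezone"],
        check=True, capture_output=True, text=True,
    ).stdout.strip())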
Jan 31 06:55:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:52.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:52 compute-1 sudo[105937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yadexpjpkeascitvejmfaazvzqrhuuns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842552.3384995-681-52011847206437/AnsiballZ_file.py'
Jan 31 06:55:52 compute-1 sudo[105937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:52 compute-1 ceph-mon[81728]: pgmap v353: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:52 compute-1 python3.9[105939]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:52 compute-1 sudo[105937]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:53 compute-1 sudo[106089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgchegriwgmdvgljwrdnwcblyptklyqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842553.004625-705-100555941397678/AnsiballZ_stat.py'
Jan 31 06:55:53 compute-1 sudo[106089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:53 compute-1 python3.9[106091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:53 compute-1 sudo[106089]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:53 compute-1 sudo[106167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azyfctyueexitatewtwqggktqtvnxekm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842553.004625-705-100555941397678/AnsiballZ_file.py'
Jan 31 06:55:53 compute-1 sudo[106167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:53 compute-1 ceph-mon[81728]: 9.3 scrub starts
Jan 31 06:55:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:53 compute-1 ceph-mon[81728]: 9.3 scrub ok
Jan 31 06:55:53 compute-1 python3.9[106169]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:53 compute-1 sudo[106167]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:55:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:54.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:55:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:55:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:54.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:55:54 compute-1 sudo[106319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxzpwrmjhtbluxdvparxwznwvxohjpes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842554.0617523-741-243641340530075/AnsiballZ_stat.py'
Jan 31 06:55:54 compute-1 sudo[106319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:54 compute-1 python3.9[106321]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:54 compute-1 sudo[106319]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:54 compute-1 sudo[106397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trutjgwbkckgdojmjouiwipszlglkfnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842554.0617523-741-243641340530075/AnsiballZ_file.py'
Jan 31 06:55:54 compute-1 sudo[106397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:54 compute-1 ceph-mon[81728]: pgmap v354: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:54 compute-1 python3.9[106399]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8jngrmp6 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:54 compute-1 sudo[106397]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:55 compute-1 sudo[106549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvlzhhbuebmagmsporxnywiqakzflaoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842555.198201-777-53792460869985/AnsiballZ_stat.py'
Jan 31 06:55:55 compute-1 sudo[106549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:55 compute-1 python3.9[106551]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:55 compute-1 sudo[106549]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:55 compute-1 sudo[106627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibnmmxclygsvcwbeiihwevtynurnlmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842555.198201-777-53792460869985/AnsiballZ_file.py'
Jan 31 06:55:55 compute-1 sudo[106627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:55.893600) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842555893832, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2780, "num_deletes": 251, "total_data_size": 5201761, "memory_usage": 5270344, "flush_reason": "Manual Compaction"}
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842555948285, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3370574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7142, "largest_seqno": 9916, "table_properties": {"data_size": 3359726, "index_size": 6254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 31717, "raw_average_key_size": 22, "raw_value_size": 3334012, "raw_average_value_size": 2374, "num_data_blocks": 277, "num_entries": 1404, "num_filter_entries": 1404, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842377, "oldest_key_time": 1769842377, "file_creation_time": 1769842555, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 54746 microseconds, and 4902 cpu microseconds.
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:55.948345) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3370574 bytes OK
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:55.948367) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:55.958392) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:55.958476) EVENT_LOG_v1 {"time_micros": 1769842555958455, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:55.958528) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5188398, prev total WAL file size 5188398, number of live WAL files 2.
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:55.961263) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3291KB)], [15(7789KB)]
Jan 31 06:55:55 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842555961353, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11346619, "oldest_snapshot_seqno": -1}
Jan 31 06:55:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:55:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:56.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:55:56 compute-1 python3.9[106629]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:56 compute-1 sudo[106627]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 4189 keys, 9698589 bytes, temperature: kUnknown
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842556133046, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9698589, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9665362, "index_size": 21661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 102959, "raw_average_key_size": 24, "raw_value_size": 9584273, "raw_average_value_size": 2287, "num_data_blocks": 945, "num_entries": 4189, "num_filter_entries": 4189, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769842555, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:56.133277) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9698589 bytes
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:56.161473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.1 rd, 56.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 4716, records dropped: 527 output_compression: NoCompression
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:56.161530) EVENT_LOG_v1 {"time_micros": 1769842556161510, "job": 6, "event": "compaction_finished", "compaction_time_micros": 171755, "compaction_time_cpu_micros": 17329, "output_level": 6, "num_output_files": 1, "total_output_size": 9698589, "num_input_records": 4716, "num_output_records": 4189, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842556162022, "job": 6, "event": "table_file_deletion", "file_number": 17}
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842556162829, "job": 6, "event": "table_file_deletion", "file_number": 15}
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:55.961174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:56.162881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:56.162886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:56.162888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:56.162890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:55:56 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:55:56.162891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:55:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:55:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:56.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:55:56 compute-1 sudo[106779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggkuwozpkhodfavzgwzcdajymenjplao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842556.3945198-816-268314988491438/AnsiballZ_command.py'
Jan 31 06:55:56 compute-1 sudo[106779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:56 compute-1 python3.9[106781]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:55:57 compute-1 sudo[106779]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:57 compute-1 ceph-mon[81728]: pgmap v355: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:55:57 compute-1 sudo[106932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyyzajyxgjtxlodfuazlbuqqkqvtkdtt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769842557.2561524-840-227401973432267/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 06:55:57 compute-1 sudo[106932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:57 compute-1 python3[106934]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 06:55:57 compute-1 sudo[106932]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:55:58.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:55:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:55:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:55:58.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:55:58 compute-1 ceph-mon[81728]: 9.7 deep-scrub starts
Jan 31 06:55:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:58 compute-1 ceph-mon[81728]: 9.7 deep-scrub ok
Jan 31 06:55:58 compute-1 sudo[107084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owckoymotxvxhnjsnszyvxchjqmbtxtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842558.0807219-864-201778210733941/AnsiballZ_stat.py'
Jan 31 06:55:58 compute-1 sudo[107084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:58 compute-1 python3.9[107086]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:55:58 compute-1 sudo[107084]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:55:59 compute-1 sudo[107162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksajedjlnrqqxqyjupugiwajyahksvxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842558.0807219-864-201778210733941/AnsiballZ_file.py'
Jan 31 06:55:59 compute-1 sudo[107162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:55:59 compute-1 python3.9[107164]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:55:59 compute-1 sudo[107162]: pam_unix(sudo:session): session closed for user root
Jan 31 06:55:59 compute-1 ceph-mon[81728]: 9.18 deep-scrub starts
Jan 31 06:55:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:55:59 compute-1 ceph-mon[81728]: 9.18 deep-scrub ok
Jan 31 06:55:59 compute-1 ceph-mon[81728]: pgmap v356: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:00.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:00.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:00 compute-1 ceph-mon[81728]: 9.8 scrub starts
Jan 31 06:56:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:00 compute-1 ceph-mon[81728]: 9.8 scrub ok
Jan 31 06:56:00 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:00 compute-1 ceph-mon[81728]: 9.9 scrub starts
Jan 31 06:56:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:00 compute-1 ceph-mon[81728]: 9.9 scrub ok
Jan 31 06:56:01 compute-1 ceph-mon[81728]: pgmap v357: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:01 compute-1 ceph-mon[81728]: 9.16 deep-scrub starts
Jan 31 06:56:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:01 compute-1 ceph-mon[81728]: 9.16 deep-scrub ok
Jan 31 06:56:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:02.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:02.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:02 compute-1 ceph-mon[81728]: pgmap v358: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:56:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:04.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:56:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:04.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:04 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:04 compute-1 ceph-mon[81728]: pgmap v359: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:04 compute-1 sudo[107314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aejubfokdechiavnfqqbmohpcdpmarah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842564.5975864-900-216826755636574/AnsiballZ_stat.py'
Jan 31 06:56:04 compute-1 sudo[107314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:05 compute-1 python3.9[107316]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:56:05 compute-1 sudo[107314]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:05 compute-1 sudo[107439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwxgzakrnchibupqgjycsyeqdchwbyky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842564.5975864-900-216826755636574/AnsiballZ_copy.py'
Jan 31 06:56:05 compute-1 sudo[107439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:05 compute-1 python3.9[107441]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842564.5975864-900-216826755636574/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:05 compute-1 sudo[107439]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:56:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:06.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:56:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:06.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:06 compute-1 sudo[107591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tirohropuvadujztxldxpnquruqgegmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842566.025151-945-47429158789928/AnsiballZ_stat.py'
Jan 31 06:56:06 compute-1 sudo[107591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:06 compute-1 python3.9[107593]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:56:06 compute-1 sudo[107591]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:06 compute-1 sudo[107669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oojqtyxjjlnvjhnnyloctpfxodidbssk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842566.025151-945-47429158789928/AnsiballZ_file.py'
Jan 31 06:56:06 compute-1 sudo[107669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:06 compute-1 ceph-mon[81728]: 9.1d deep-scrub starts
Jan 31 06:56:06 compute-1 ceph-mon[81728]: 9.1d deep-scrub ok
Jan 31 06:56:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:06 compute-1 python3.9[107671]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:06 compute-1 sudo[107669]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:07 compute-1 sudo[107821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiaiowcwmfgvabjqwualrzpsptgpxctw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842567.310636-981-60994832938987/AnsiballZ_stat.py'
Jan 31 06:56:07 compute-1 sudo[107821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:07 compute-1 ceph-mon[81728]: pgmap v360: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:07 compute-1 python3.9[107823]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:56:07 compute-1 sudo[107821]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:08.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:56:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:08.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:56:08 compute-1 sudo[107899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqjlwstppwkyikddsdppnhyjoaofqspx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842567.310636-981-60994832938987/AnsiballZ_file.py'
Jan 31 06:56:08 compute-1 sudo[107899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:08 compute-1 python3.9[107901]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:08 compute-1 sudo[107899]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:08 compute-1 ceph-mon[81728]: pgmap v361: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:09 compute-1 sudo[108051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwzettgcqcrvxtrlijeekmeqigwobzwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842568.7616205-1017-96436930178067/AnsiballZ_stat.py'
Jan 31 06:56:09 compute-1 sudo[108051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:09 compute-1 python3.9[108053]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:56:09 compute-1 sudo[108051]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:09 compute-1 sudo[108129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owxrmzxfbaastwmiochnfbjawrizjynw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842568.7616205-1017-96436930178067/AnsiballZ_file.py'
Jan 31 06:56:09 compute-1 sudo[108129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:09 compute-1 python3.9[108131]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:09 compute-1 sudo[108129]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:09 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:10.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:56:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:10.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:56:10 compute-1 sudo[108281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdtmylwhqxhkykmxboiqtxmkkrsbsnzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842570.087244-1056-120319928775028/AnsiballZ_command.py'
Jan 31 06:56:10 compute-1 sudo[108281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:10 compute-1 python3.9[108283]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:56:10 compute-1 sudo[108281]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:11 compute-1 ceph-mon[81728]: pgmap v362: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:11 compute-1 sudo[108436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aowagsydmcmtxcdrfnwnwbblelwukgnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842570.7359862-1080-32670586370711/AnsiballZ_blockinfile.py'
Jan 31 06:56:11 compute-1 sudo[108436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:11 compute-1 python3.9[108438]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:11 compute-1 sudo[108436]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:11 compute-1 sudo[108588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bburrtcjssexbqlsuzpywqhfgnzbnyus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842571.7645416-1107-181104623558614/AnsiballZ_file.py'
Jan 31 06:56:11 compute-1 sudo[108588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:12.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:12 compute-1 python3.9[108590]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:12 compute-1 sudo[108588]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:12.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:12 compute-1 sudo[108740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qletbblqrlbffkmgsrymoewuueojcfpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842572.2643685-1107-80525851249933/AnsiballZ_file.py'
Jan 31 06:56:12 compute-1 sudo[108740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:12 compute-1 python3.9[108742]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:12 compute-1 sudo[108740]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:13 compute-1 ceph-mon[81728]: pgmap v363: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:13 compute-1 sudo[108892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ginicplqtrcshfaddwltnkgktknjjups ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842573.0006473-1152-94514486101395/AnsiballZ_mount.py'
Jan 31 06:56:13 compute-1 sudo[108892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:13 compute-1 python3.9[108894]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 06:56:13 compute-1 sudo[108892]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:14.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:14 compute-1 sudo[109044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnanganrqesavwnolvazeoodvkhmcqsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842573.8508432-1152-62403111535281/AnsiballZ_mount.py'
Jan 31 06:56:14 compute-1 sudo[109044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:14.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:14 compute-1 python3.9[109046]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 06:56:14 compute-1 sudo[109044]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:15 compute-1 sshd-session[101832]: Connection closed by 192.168.122.30 port 48732
Jan 31 06:56:15 compute-1 sshd-session[101829]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:56:15 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 06:56:15 compute-1 systemd[1]: session-39.scope: Consumed 22.889s CPU time.
Jan 31 06:56:15 compute-1 systemd-logind[788]: Session 39 logged out. Waiting for processes to exit.
Jan 31 06:56:15 compute-1 systemd-logind[788]: Removed session 39.
Jan 31 06:56:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:15 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:15 compute-1 ceph-mon[81728]: pgmap v364: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:56:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:16.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:56:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:16.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:16 compute-1 ceph-mon[81728]: pgmap v365: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:18.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:56:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:18.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:56:18 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:18 compute-1 ceph-mon[81728]: pgmap v366: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:19 compute-1 sudo[109071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:56:19 compute-1 sudo[109071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:56:19 compute-1 sudo[109071]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:19 compute-1 sudo[109096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:56:19 compute-1 sudo[109096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:56:19 compute-1 sudo[109096]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:19 compute-1 sudo[109121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:56:19 compute-1 sudo[109121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:56:19 compute-1 sudo[109121]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:19 compute-1 sudo[109146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:56:19 compute-1 sudo[109146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:56:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:20.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:56:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:20.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:56:20 compute-1 sudo[109146]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:20 compute-1 ceph-mon[81728]: pgmap v367: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:56:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:56:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:22.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:22 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 06:56:22 compute-1 sshd-session[109202]: Accepted publickey for zuul from 192.168.122.30 port 50536 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:56:22 compute-1 systemd-logind[788]: New session 40 of user zuul.
Jan 31 06:56:22 compute-1 systemd[1]: Started Session 40 of User zuul.
Jan 31 06:56:22 compute-1 sshd-session[109202]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:56:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:22.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:22 compute-1 sudo[109358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moexlnmbaoglpyjoshmeznpnzchcifuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842582.2304704-24-247735718473439/AnsiballZ_tempfile.py'
Jan 31 06:56:22 compute-1 sudo[109358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:22 compute-1 python3.9[109360]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 06:56:22 compute-1 sudo[109358]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:23 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:56:23 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:56:23 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:56:23 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:56:23 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:56:23 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:56:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:23 compute-1 ceph-mon[81728]: pgmap v368: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:23 compute-1 sudo[109510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghchhfjzgwhbenlpqhwgznahlohkwylc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842583.1170797-60-276259899684868/AnsiballZ_stat.py'
Jan 31 06:56:23 compute-1 sudo[109510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:23 compute-1 python3.9[109512]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:56:23 compute-1 sudo[109510]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:23 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:24.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:56:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:24.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:56:24 compute-1 sudo[109664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edomdxuhikfxyifhcwauxaxerwffulyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842583.906777-84-276781801408767/AnsiballZ_slurp.py'
Jan 31 06:56:24 compute-1 sudo[109664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:24 compute-1 python3.9[109666]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 06:56:24 compute-1 sudo[109664]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:25 compute-1 sudo[109816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whehnsfacnrrqdabtikghigoddvpzhis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842584.726746-109-144907461600079/AnsiballZ_stat.py'
Jan 31 06:56:25 compute-1 sudo[109816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:25 compute-1 python3.9[109818]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.d78grb58 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:56:25 compute-1 sudo[109816]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:25 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:25 compute-1 ceph-mon[81728]: pgmap v369: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:25 compute-1 sudo[109941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfatrvdmwbhltmvsddskbhhnhdtwhqrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842584.726746-109-144907461600079/AnsiballZ_copy.py'
Jan 31 06:56:25 compute-1 sudo[109941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:26 compute-1 python3.9[109943]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.d78grb58 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842584.726746-109-144907461600079/.source.d78grb58 _original_basename=.93tnjw09 follow=False checksum=fe1ebeeefeefbe8ea03479c835e7fd7974336244 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:26 compute-1 sudo[109941]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:56:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:26.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:56:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:26.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:26 compute-1 sudo[110093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egqschjrtqcixjgprprjpcygigrkcpuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842586.2619746-153-268066904812555/AnsiballZ_setup.py'
Jan 31 06:56:26 compute-1 sudo[110093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:27 compute-1 python3.9[110095]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:27 compute-1 sudo[110093]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:27 compute-1 ceph-mon[81728]: pgmap v370: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:27 compute-1 sudo[110245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iumhcvgxmlcznhfocqzekxwdvbvmxddz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842587.4387867-178-234277177830550/AnsiballZ_blockinfile.py'
Jan 31 06:56:27 compute-1 sudo[110245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:27 compute-1 sudo[110248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:56:27 compute-1 sudo[110248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:56:27 compute-1 sudo[110248]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:27 compute-1 sudo[110273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:56:27 compute-1 sudo[110273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:56:27 compute-1 sudo[110273]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:27 compute-1 python3.9[110247]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCux/eS/9tJWdvcz7CSqzbT3/CFFfMIoClo+OiLmW4DHDCsL7b4Sd8s4ZGetrM/b9d+nZhH3I0np2S0wkbf0kzxDpFnzV/CqSLPcHC1GFG8DlXIWkbbK3H9Nc+il8eG2rceqOXs5LCS6H6lOeSAynOJd7kkW0euL4YtQcqH6/PCpvaHnyAXOL9+76w6apGzrWBRGSKGvwJiCrundYhP4TjMSlb6ITyIdF0bE1617p7zZOh+CQt6wB17bBAKL/ZR7qQsjbIhW1zwJ7R0NuWJrgxemGImJ3YRN+2WJ5UpNJxoMPkwC67IfW4avOTykueyK9cACQ/OLPMvhxBVzsBBfmV7Xl5RquVXDj1OrXfG+zVu5YV0+GEtmxZhptXdzBvMkDBAr3hRB/jE/GZeCx/d6eoA3vfyT7tFrBaunMaiIutt/GbmQBhPSqSrqgau7M8rqs7ocyOCZI3ezwskVMxOX8yCOVAib7rHUkj+I+B48V/7MXiHOkBpOBUmgGSiM2whUe8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJiG2htD5mCqa+IIAJsjOKgNJpPNmrlfh2g7QGI6KcQd
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG8QHiFr+d3LEQcNktaGAAZTvvRlNt/N3ZuLInnbRWqbA8w9jqUbMmg6m0Yc2Z+a+4iHrAMgRl5PGiHtvzbSe78=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVmjyOgMrBcNkKRe/3MkTqg/LhVt3sOvBD2IwLvjJmLe3cxmmFlu3iixT4LIzRscHQxUt6EqOuAiYL2BapPTTPjEaB+TseppBVXIPZfjllMgVy8pSqsZa+MUsbI4pONfcoart2REu5ObJIPOSl3YDAkGB+rxeAE1BD+sYmdlKriC/2JkUcS6p03QSjQnukMP476+uzXmPHLvm7A9TJjN2Oa4FkgJFI8+gFZaKPpHzCdoYD8COI0LYpp49uJ0gHQ7E4AepcpNUZXBgEsYKntsF9J/md1b13dW0ucGniV3eVxfWAH3xMRlwfFrT8TB+iQ74ghNmDEY/CCpZwkpL4W6bV7GT4+3nbvWIJv9/dgPSqeunTbbAWPEu6KM0nOuOGVRtQ6+q4aM3TRwV0DUvZptSGhRnHOekdOBRtiuMOnClub09PJMyOr4fKi3e59CfIx36NjxbNZfwA1j9jS3BDHL5BtATwiuTVMUWtdRYUT0h4zdmDtHkVnnPQBm2C3d7o/8c=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHThs9i/0cwyfrem5xVfEov0dwlVT7YQsUAzvhlKxVcU
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPv7c3x32Z77V8zjbPteGtuwIl3HzfI8HP5le/fNUtef+zMbIe6oyaIlzMLTKYnfaTTkKeVwM+hyTawD64NkAc=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7oaNruBF82m85jI32p4Mj+yn4T3FBHQ7cMc6lELq3AspplPtBQsmBgDfhjfVg1I4+kEqlqvMmBXvkZu7SGFPiUPQlioc6MCfPrB8/wSLBG/pEWqlStSpdkbOBEEivzl5kpIYrbNpwH3q/sL6mbZB4fYlpLP6SY4uxDutOWZutUUlzDguTJUprXhv8BnwgqPoBM7wwuPY+U9PSdLY8pxG40xO+UQ9llhK0rTX9Io1k8OtlJeJu/zVCmcEIp7bMmk4GLYHzfhe1JW7+O8RnNxmyEbfEZpJRKD+squSzbEC4jYJSF2ZIG9++KZY33LUAy3Krn46o8Bo+vBJX3HRYdgtGaejzyYimDJ2OPL+UB5K9tTqqKbQlmhZODmFmTVgZabEHzHSuT+dTFBmmzW17ll4cWYHemkonjSM+nl3zO9Quwp+HRmkAa5/uJIFeVLZInx7/aeHCar427H5OnfpuSLc1X9uSNlPAvvIdlXagkfCOLBFXlBSPhkDBqBq9MX7u0ic=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ4MRNp0lqMmdnWHkBaN0bYiu3NyVZLTvXbzAb78HL/H
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKrqTuBK9SuQu9hS9hBIqRv9weMcR5IS3TOGti2Gz24hxwuCxS2PuVSyWVacVoXmRrXt6Nl3b5KRQ35C6gTvbIU=
                                              create=True mode=0644 path=/tmp/ansible.d78grb58 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:28 compute-1 sudo[110245]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:56:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:28.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:56:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:28 compute-1 sudo[110447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcouupqbqcydzxgjsyrjdopyzcbxiwos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842588.2131221-202-210039295025856/AnsiballZ_command.py'
Jan 31 06:56:28 compute-1 sudo[110447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:56:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:56:28 compute-1 ceph-mon[81728]: pgmap v371: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:28 compute-1 python3.9[110449]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.d78grb58' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:56:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:28 compute-1 sudo[110447]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:29 compute-1 sudo[110601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyzlvberlcfczhpxiqsbakrgqmttfctm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842589.2387214-226-109147803477695/AnsiballZ_file.py'
Jan 31 06:56:29 compute-1 sudo[110601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:29 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:29 compute-1 python3.9[110603]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.d78grb58 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:29 compute-1 sudo[110601]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:30.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:30.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:30 compute-1 sshd-session[109208]: Connection closed by 192.168.122.30 port 50536
Jan 31 06:56:30 compute-1 sshd-session[109202]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:56:30 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 06:56:30 compute-1 systemd[1]: session-40.scope: Consumed 3.936s CPU time.
Jan 31 06:56:30 compute-1 systemd-logind[788]: Session 40 logged out. Waiting for processes to exit.
Jan 31 06:56:30 compute-1 systemd-logind[788]: Removed session 40.
Jan 31 06:56:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:30 compute-1 ceph-mon[81728]: pgmap v372: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:32.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:32.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:32 compute-1 ceph-mon[81728]: pgmap v373: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:33 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:34.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:34.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:34 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:34 compute-1 ceph-mon[81728]: pgmap v374: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:35 compute-1 sshd-session[110628]: Accepted publickey for zuul from 192.168.122.30 port 45556 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:56:35 compute-1 systemd-logind[788]: New session 41 of user zuul.
Jan 31 06:56:35 compute-1 systemd[1]: Started Session 41 of User zuul.
Jan 31 06:56:35 compute-1 sshd-session[110628]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:56:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:56:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:36.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:56:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:36.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:36 compute-1 python3.9[110781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:36 compute-1 ceph-mon[81728]: pgmap v375: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:37 compute-1 sudo[110935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wieljxtqlkkfvomqgvzbyoontjxyhyxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842597.1818907-57-227303973186104/AnsiballZ_systemd.py'
Jan 31 06:56:37 compute-1 sudo[110935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:37 compute-1 python3.9[110937]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 06:56:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:38 compute-1 sudo[110935]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:38.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:38.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:38 compute-1 sudo[111089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcmrptzagrophwiedvjdqlczwpllqahb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842598.2332687-81-114824531320486/AnsiballZ_systemd.py'
Jan 31 06:56:38 compute-1 sudo[111089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:38 compute-1 python3.9[111091]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:56:38 compute-1 sudo[111089]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:38 compute-1 ceph-mon[81728]: pgmap v376: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:39 compute-1 sudo[111242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubsdeehgjptetszsmfenqtcimadfkmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842599.243916-108-114616117051890/AnsiballZ_command.py'
Jan 31 06:56:39 compute-1 sudo[111242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:39 compute-1 python3.9[111244]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:56:39 compute-1 sudo[111242]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:39 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 339 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:40.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:40 compute-1 sudo[111395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjzwqqttnuzcelobisxhadbesloekbkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842599.9919283-132-184458768483519/AnsiballZ_stat.py'
Jan 31 06:56:40 compute-1 sudo[111395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:40 compute-1 python3.9[111397]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:56:40 compute-1 sudo[111395]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:41 compute-1 ceph-mon[81728]: pgmap v377: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:41 compute-1 sudo[111547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sspvkwqxnykpwdiuzaqwwzfrilachsgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842600.6951215-159-239540886023973/AnsiballZ_file.py'
Jan 31 06:56:41 compute-1 sudo[111547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:41 compute-1 python3.9[111549]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:41 compute-1 sudo[111547]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:41 compute-1 sshd-session[110631]: Connection closed by 192.168.122.30 port 45556
Jan 31 06:56:41 compute-1 sshd-session[110628]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:56:41 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 06:56:41 compute-1 systemd[1]: session-41.scope: Consumed 3.123s CPU time.
Jan 31 06:56:41 compute-1 systemd-logind[788]: Session 41 logged out. Waiting for processes to exit.
Jan 31 06:56:41 compute-1 systemd-logind[788]: Removed session 41.
Jan 31 06:56:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:56:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:42.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:56:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:42.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:43 compute-1 ceph-mon[81728]: pgmap v378: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:56:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:44.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:56:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:44.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:45 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 344 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:45 compute-1 ceph-mon[81728]: pgmap v379: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:46.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:47 compute-1 ceph-mon[81728]: pgmap v380: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:47 compute-1 sshd-session[111574]: Accepted publickey for zuul from 192.168.122.30 port 47796 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:56:47 compute-1 systemd-logind[788]: New session 42 of user zuul.
Jan 31 06:56:47 compute-1 systemd[1]: Started Session 42 of User zuul.
Jan 31 06:56:47 compute-1 sshd-session[111574]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:56:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:48.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:48 compute-1 python3.9[111727]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:48 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:49 compute-1 ceph-mon[81728]: pgmap v381: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:49 compute-1 sudo[111881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnuanouxntzuaiwdkffkznkscqpwnoqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842609.1296697-63-154744112035968/AnsiballZ_setup.py'
Jan 31 06:56:49 compute-1 sudo[111881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:49 compute-1 python3.9[111883]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:56:49 compute-1 sudo[111881]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 06:56:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:50.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 06:56:50 compute-1 sudo[111965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdgcpowojzrrwhkuajmrnbzjizwqqeve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842609.1296697-63-154744112035968/AnsiballZ_dnf.py'
Jan 31 06:56:50 compute-1 sudo[111965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 06:56:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:50.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 06:56:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:50 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 349 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:50 compute-1 python3.9[111967]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 06:56:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:51 compute-1 ceph-mon[81728]: pgmap v382: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:51 compute-1 sudo[111965]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 06:56:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:52.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 06:56:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:52.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:52 compute-1 python3.9[112118]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:56:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:53 compute-1 ceph-mon[81728]: pgmap v383: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:53 compute-1 python3.9[112269]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 06:56:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 06:56:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:54.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 06:56:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:54.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:54 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 354 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:56:54 compute-1 python3.9[112419]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:56:54 compute-1 python3.9[112569]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:56:55 compute-1 ceph-mon[81728]: pgmap v384: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:55 compute-1 sshd-session[111577]: Connection closed by 192.168.122.30 port 47796
Jan 31 06:56:55 compute-1 sshd-session[111574]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:56:55 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 06:56:55 compute-1 systemd[1]: session-42.scope: Consumed 5.054s CPU time.
Jan 31 06:56:55 compute-1 systemd-logind[788]: Session 42 logged out. Waiting for processes to exit.
Jan 31 06:56:55 compute-1 systemd-logind[788]: Removed session 42.
Jan 31 06:56:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:56.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 06:56:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:56.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 06:56:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:56 compute-1 ceph-mon[81728]: pgmap v385: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:56:58.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:56:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:56:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:56:58.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:56:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:58 compute-1 ceph-mon[81728]: pgmap v386: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:56:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:56:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:56:59 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 359 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:00.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:00.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:00 compute-1 ceph-mon[81728]: pgmap v387: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:01 compute-1 sshd-session[112594]: Accepted publickey for zuul from 192.168.122.30 port 45396 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:57:01 compute-1 systemd-logind[788]: New session 43 of user zuul.
Jan 31 06:57:01 compute-1 systemd[1]: Started Session 43 of User zuul.
Jan 31 06:57:01 compute-1 sshd-session[112594]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:57:01 compute-1 python3.9[112747]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:57:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:02.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:02 compute-1 ceph-mon[81728]: pgmap v388: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:03 compute-1 sudo[112901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eawygqqtsigzyxedphibtansvqncalfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842622.9991412-110-20890455610751/AnsiballZ_file.py'
Jan 31 06:57:03 compute-1 sudo[112901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:03 compute-1 python3.9[112903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:03 compute-1 sudo[112901]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:03 compute-1 sudo[113053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovjnvyfubeubekvlmodmkckwknktzmdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842623.6829615-110-69172189073686/AnsiballZ_file.py'
Jan 31 06:57:03 compute-1 sudo[113053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:04 compute-1 python3.9[113055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:04.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:04 compute-1 sudo[113053]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:04.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:04 compute-1 sudo[113205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naunzjpxythslxrfxvkplrkytguiizxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842624.3064497-157-143573491664056/AnsiballZ_stat.py'
Jan 31 06:57:04 compute-1 sudo[113205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:05 compute-1 python3.9[113207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:05 compute-1 sudo[113205]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:05 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:05 compute-1 ceph-mon[81728]: pgmap v389: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:05 compute-1 sudo[113328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxtbvosshflyzxvwkiczxnemmcagdgcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842624.3064497-157-143573491664056/AnsiballZ_copy.py'
Jan 31 06:57:05 compute-1 sudo[113328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:05 compute-1 python3.9[113330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842624.3064497-157-143573491664056/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=c538923a38b15751f3b9b715090ebe3c51a5c0b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:05 compute-1 sudo[113328]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:05 compute-1 sudo[113480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrluguyfwrsnyavcseibsjydbjzwkqws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842625.754841-157-66378225758381/AnsiballZ_stat.py'
Jan 31 06:57:05 compute-1 sudo[113480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:06 compute-1 python3.9[113482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:06 compute-1 sudo[113480]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:06.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:06.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:06 compute-1 sudo[113603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nymnymkpgbmlxztqfywsufhwgdeuiuau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842625.754841-157-66378225758381/AnsiballZ_copy.py'
Jan 31 06:57:06 compute-1 sudo[113603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:06 compute-1 python3.9[113605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842625.754841-157-66378225758381/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=8029520a6e9bb0cd2e43949d17831c57eb8ef4f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:06 compute-1 sudo[113603]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:06 compute-1 sudo[113755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxqektcasllatcnstftkrtjonviuhskp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842626.675946-157-93378846861055/AnsiballZ_stat.py'
Jan 31 06:57:06 compute-1 sudo[113755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:07 compute-1 python3.9[113757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:07 compute-1 sudo[113755]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:07 compute-1 sudo[113878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtqctonpqgrnnwufucjstpdqhzqqfcyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842626.675946-157-93378846861055/AnsiballZ_copy.py'
Jan 31 06:57:07 compute-1 sudo[113878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:07 compute-1 ceph-mon[81728]: pgmap v390: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:07 compute-1 python3.9[113880]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842626.675946-157-93378846861055/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=17bc73a252b672f1d231ca17a03630381b0348be backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:07 compute-1 sudo[113878]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:07 compute-1 sudo[114030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcwghinfwkjwgkdkrsxaqyznjmdwpnyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842627.7071145-282-52393120487072/AnsiballZ_file.py'
Jan 31 06:57:07 compute-1 sudo[114030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:08 compute-1 python3.9[114032]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:08.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:08 compute-1 sudo[114030]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:08.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:08 compute-1 sudo[114182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frrhttzzlkxgulxyfjorelnijdruiixh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842628.255307-282-153515592665389/AnsiballZ_file.py'
Jan 31 06:57:08 compute-1 sudo[114182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:08 compute-1 python3.9[114184]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:08 compute-1 sudo[114182]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:09 compute-1 sudo[114334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hknrqunreyrriigjdpsrtauhzupbftfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842628.876426-329-183658558478034/AnsiballZ_stat.py'
Jan 31 06:57:09 compute-1 sudo[114334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.368921) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629368953, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1224, "num_deletes": 252, "total_data_size": 2132087, "memory_usage": 2172856, "flush_reason": "Manual Compaction"}
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629377729, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 920801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9921, "largest_seqno": 11140, "table_properties": {"data_size": 916562, "index_size": 1635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12629, "raw_average_key_size": 21, "raw_value_size": 906706, "raw_average_value_size": 1511, "num_data_blocks": 70, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842556, "oldest_key_time": 1769842556, "file_creation_time": 1769842629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8869 microseconds, and 2508 cpu microseconds.
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.377790) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 920801 bytes OK
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.377806) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.378915) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.378928) EVENT_LOG_v1 {"time_micros": 1769842629378924, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.378943) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2126087, prev total WAL file size 2126087, number of live WAL files 2.
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.379308) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(899KB)], [18(9471KB)]
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629379383, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10619390, "oldest_snapshot_seqno": -1}
Jan 31 06:57:09 compute-1 python3.9[114336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:09 compute-1 sudo[114334]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4301 keys, 8066890 bytes, temperature: kUnknown
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629451747, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 8066890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8036008, "index_size": 18996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 106281, "raw_average_key_size": 24, "raw_value_size": 7955883, "raw_average_value_size": 1849, "num_data_blocks": 825, "num_entries": 4301, "num_filter_entries": 4301, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769842629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.452010) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 8066890 bytes
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.453164) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.6 rd, 111.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.2 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(20.3) write-amplify(8.8) OK, records in: 4789, records dropped: 488 output_compression: NoCompression
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.453185) EVENT_LOG_v1 {"time_micros": 1769842629453175, "job": 8, "event": "compaction_finished", "compaction_time_micros": 72439, "compaction_time_cpu_micros": 13666, "output_level": 6, "num_output_files": 1, "total_output_size": 8066890, "num_input_records": 4789, "num_output_records": 4301, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629453388, "job": 8, "event": "table_file_deletion", "file_number": 20}
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842629454337, "job": 8, "event": "table_file_deletion", "file_number": 18}
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.379278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.454378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.454383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.454385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.454387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:57:09 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:57:09.454389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:57:09 compute-1 ceph-mon[81728]: pgmap v391: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:09 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 369 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:09 compute-1 sudo[114457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzlctkkodahjrlgghzeablmfezildnfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842628.876426-329-183658558478034/AnsiballZ_copy.py'
Jan 31 06:57:09 compute-1 sudo[114457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:09 compute-1 python3.9[114459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842628.876426-329-183658558478034/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=b4fea7ceb01b8799b3517ed6d009f9fda90f0984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:09 compute-1 sudo[114457]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:10.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:10 compute-1 sudo[114609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtonpgsbchqprglhffsagjexannkoxse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842630.0370917-329-34146813333890/AnsiballZ_stat.py'
Jan 31 06:57:10 compute-1 sudo[114609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 06:57:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:10.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 06:57:10 compute-1 python3.9[114611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:10 compute-1 sudo[114609]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:10 compute-1 ceph-mon[81728]: pgmap v392: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:10 compute-1 sudo[114732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfvjtxardfvqcwxvzjvdyrgwrordaeqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842630.0370917-329-34146813333890/AnsiballZ_copy.py'
Jan 31 06:57:10 compute-1 sudo[114732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:10 compute-1 python3.9[114734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842630.0370917-329-34146813333890/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=0dd65ea80bde2935b665c1b68742c885268ebc5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:10 compute-1 sudo[114732]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:11 compute-1 sudo[114884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfqyobmlpmadhcqviymhehkcnmjrowur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842631.078561-329-169293485333569/AnsiballZ_stat.py'
Jan 31 06:57:11 compute-1 sudo[114884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:11 compute-1 python3.9[114886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:11 compute-1 sudo[114884]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:11 compute-1 sudo[115007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldeazdtttvptouvspiqtbqvuxlsvzbdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842631.078561-329-169293485333569/AnsiballZ_copy.py'
Jan 31 06:57:11 compute-1 sudo[115007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:12 compute-1 python3.9[115009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842631.078561-329-169293485333569/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=247c5e6d7a88c176f60dd12cf09fa43fc33a31d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:12 compute-1 sudo[115007]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:12.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:12.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:12 compute-1 sudo[115159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgpsdiqhlifyjpyygdpzlqtqpmabrlec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842632.2194915-459-152136782019571/AnsiballZ_file.py'
Jan 31 06:57:12 compute-1 sudo[115159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:12 compute-1 ceph-mon[81728]: pgmap v393: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:12 compute-1 python3.9[115161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:12 compute-1 sudo[115159]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:12 compute-1 sudo[115311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryqreawuindtylkumulnbabjyuevmcgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842632.7052886-459-165379966157972/AnsiballZ_file.py'
Jan 31 06:57:12 compute-1 sudo[115311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:13 compute-1 python3.9[115313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:13 compute-1 sudo[115311]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:13 compute-1 sudo[115463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imanwqtnwabgwdovavhpagtmgmlnfvtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842633.490723-501-226438333845845/AnsiballZ_stat.py'
Jan 31 06:57:13 compute-1 sudo[115463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:13 compute-1 python3.9[115465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:13 compute-1 sudo[115463]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:14.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:14 compute-1 sudo[115586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egtohvvqqdheddbhbcdlbhcvhxpefyud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842633.490723-501-226438333845845/AnsiballZ_copy.py'
Jan 31 06:57:14 compute-1 sudo[115586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 06:57:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:14.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 06:57:14 compute-1 python3.9[115588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842633.490723-501-226438333845845/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=3b57c685a72b5174fc338147e1dc1141cd2ccc76 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:14 compute-1 sudo[115586]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:14 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 374 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:14 compute-1 ceph-mon[81728]: pgmap v394: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:14 compute-1 sudo[115738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emgyytwsdnvycjqrrzoptaeabxsxoouy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842634.468077-501-5102035589579/AnsiballZ_stat.py'
Jan 31 06:57:14 compute-1 sudo[115738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:14 compute-1 python3.9[115740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:14 compute-1 sudo[115738]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:15 compute-1 sudo[115861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxzwgsqfobtcopmfeeaqnduhvjdpjmaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842634.468077-501-5102035589579/AnsiballZ_copy.py'
Jan 31 06:57:15 compute-1 sudo[115861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:15 compute-1 python3.9[115863]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842634.468077-501-5102035589579/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=0dd65ea80bde2935b665c1b68742c885268ebc5d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:15 compute-1 sudo[115861]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:15 compute-1 sudo[116013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgpmlypoljkkmuexeolvsapuupptqteg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842635.4223022-501-71895962934260/AnsiballZ_stat.py'
Jan 31 06:57:15 compute-1 sudo[116013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:15 compute-1 python3.9[116015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:15 compute-1 sudo[116013]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:16.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:16 compute-1 sudo[116136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wihijzuqlmdtonftxshteudobdshnsyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842635.4223022-501-71895962934260/AnsiballZ_copy.py'
Jan 31 06:57:16 compute-1 sudo[116136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:16.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:16 compute-1 python3.9[116138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842635.4223022-501-71895962934260/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=54e74a98f3a7a6d416bf04c1f376c5f000556c4b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:16 compute-1 sudo[116136]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:16 compute-1 ceph-mon[81728]: pgmap v395: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:18.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:18 compute-1 sudo[116288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynaldslimaanoteiwamtkgiitajhambj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842637.9088395-680-63694910421521/AnsiballZ_file.py'
Jan 31 06:57:18 compute-1 sudo[116288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:18.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:18 compute-1 python3.9[116290]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:18 compute-1 sudo[116288]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:18 compute-1 ceph-mon[81728]: pgmap v396: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:18 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:18 compute-1 sudo[116440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdrrxurwwujwtmqdibwuhpphxoypbgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842638.6626215-701-258784892893972/AnsiballZ_stat.py'
Jan 31 06:57:18 compute-1 sudo[116440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:19 compute-1 python3.9[116442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:19 compute-1 sudo[116440]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:19 compute-1 sudo[116563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmkmctupqvorcsigahscoaqesxhfcpbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842638.6626215-701-258784892893972/AnsiballZ_copy.py'
Jan 31 06:57:19 compute-1 sudo[116563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 379 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:19 compute-1 python3.9[116565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842638.6626215-701-258784892893972/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:19 compute-1 sudo[116563]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:20.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 06:57:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:20.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 06:57:20 compute-1 sudo[116715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otkgrevjkkwuwqwehgwypugdorxolhpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842640.3679748-750-161438781948239/AnsiballZ_file.py'
Jan 31 06:57:20 compute-1 sudo[116715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:20 compute-1 ceph-mon[81728]: pgmap v397: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:20 compute-1 python3.9[116717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:20 compute-1 sudo[116715]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:21 compute-1 sudo[116867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcikxashpsqxeytytntvuwlhegfdubjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842641.211463-776-683345334545/AnsiballZ_stat.py'
Jan 31 06:57:21 compute-1 sudo[116867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:21 compute-1 python3.9[116869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:21 compute-1 sudo[116867]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:22.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:22.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:22 compute-1 sudo[116990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woymxuusiropttwpsrgxresrrinoldod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842641.211463-776-683345334545/AnsiballZ_copy.py'
Jan 31 06:57:22 compute-1 sudo[116990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:22 compute-1 python3.9[116992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842641.211463-776-683345334545/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:22 compute-1 sudo[116990]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:22 compute-1 ceph-mon[81728]: pgmap v398: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:23 compute-1 sudo[117142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynxxzrsrqiykffgxhlrcnxhahmfmdabi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842642.9568367-825-16290083050037/AnsiballZ_file.py'
Jan 31 06:57:23 compute-1 sudo[117142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:23 compute-1 python3.9[117144]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:23 compute-1 sudo[117142]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:23 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:24 compute-1 sudo[117294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unyrvgpwvffgvdjmlhyjwvkjuawbwwmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842643.8817022-846-21517659908027/AnsiballZ_stat.py'
Jan 31 06:57:24 compute-1 sudo[117294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:24.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:24.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:24 compute-1 python3.9[117296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:24 compute-1 sudo[117294]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:24 compute-1 sudo[117417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjydnalbtuizwfypnbndzqiixivxjeta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842643.8817022-846-21517659908027/AnsiballZ_copy.py'
Jan 31 06:57:24 compute-1 sudo[117417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:24 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 384 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:24 compute-1 ceph-mon[81728]: pgmap v399: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:25 compute-1 python3.9[117419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842643.8817022-846-21517659908027/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:25 compute-1 sudo[117417]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:25 compute-1 sudo[117569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbllfaechfinyiecropwhjtrlsltnxaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842645.3145895-895-182752821486272/AnsiballZ_file.py'
Jan 31 06:57:25 compute-1 sudo[117569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:25 compute-1 python3.9[117571]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:25 compute-1 sudo[117569]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:26.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 06:57:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:26.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 06:57:26 compute-1 sudo[117721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcommadakkjdcseyybxvzshuvfwzlals ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842646.0135922-922-196387178567830/AnsiballZ_stat.py'
Jan 31 06:57:26 compute-1 sudo[117721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:26 compute-1 python3.9[117723]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:26 compute-1 sudo[117721]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:26 compute-1 sudo[117844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svhskioqstocdffknlmfyhxxjvuwluvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842646.0135922-922-196387178567830/AnsiballZ_copy.py'
Jan 31 06:57:26 compute-1 sudo[117844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:27 compute-1 python3.9[117846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842646.0135922-922-196387178567830/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:27 compute-1 sudo[117844]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:27 compute-1 ceph-mon[81728]: pgmap v400: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:27 compute-1 sudo[117996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rckotlczinwwrtnzycrbmqhqiyykhqsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842647.375326-970-270197824992818/AnsiballZ_file.py'
Jan 31 06:57:27 compute-1 sudo[117996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:27 compute-1 python3.9[117998]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:27 compute-1 sudo[117996]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:28 compute-1 sudo[118079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:57:28 compute-1 sudo[118079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:57:28 compute-1 sudo[118079]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:28 compute-1 sudo[118123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:57:28 compute-1 sudo[118123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:57:28 compute-1 sudo[118123]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:28 compute-1 sudo[118172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:57:28 compute-1 sudo[118172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:57:28 compute-1 sudo[118172]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:28 compute-1 sudo[118231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqxxtcmdqszrdfvkllxigsahnpqftafm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842647.9711194-994-83936242611236/AnsiballZ_stat.py'
Jan 31 06:57:28 compute-1 sudo[118231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:28 compute-1 sudo[118217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:57:28 compute-1 sudo[118217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:57:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:28.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:28 compute-1 python3.9[118248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:28 compute-1 sudo[118231]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:28 compute-1 sudo[118217]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:28 compute-1 sudo[118401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xobypvwnyzzwtvhwhuobpidqlpxzrsyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842647.9711194-994-83936242611236/AnsiballZ_copy.py'
Jan 31 06:57:28 compute-1 sudo[118401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:28 compute-1 python3.9[118403]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842647.9711194-994-83936242611236/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:28 compute-1 sudo[118401]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:29 compute-1 sudo[118553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbadenzbhlzjuimntcvxajabrgykcfnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842649.1247141-1042-156156641367300/AnsiballZ_file.py'
Jan 31 06:57:29 compute-1 sudo[118553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:29 compute-1 ceph-mon[81728]: pgmap v401: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:29 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 389 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:29 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:57:29 compute-1 python3.9[118555]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:29 compute-1 sudo[118553]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:30 compute-1 sudo[118705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-autlncxmbkjgasnunnnwqwdkhghkhmle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842649.7959626-1068-286908373693/AnsiballZ_stat.py'
Jan 31 06:57:30 compute-1 sudo[118705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:30.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:30 compute-1 python3.9[118707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:30 compute-1 sudo[118705]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 06:57:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:30.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 06:57:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:57:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:57:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:57:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:57:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:57:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:57:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:57:30 compute-1 ceph-mon[81728]: pgmap v402: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:30 compute-1 sudo[118828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjxtwzicxzsvoezqwswtvxmchpqfbndm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842649.7959626-1068-286908373693/AnsiballZ_copy.py'
Jan 31 06:57:30 compute-1 sudo[118828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:30 compute-1 python3.9[118830]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842649.7959626-1068-286908373693/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5c0903ce7d45a242e5d722311138f253d8bd3b6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:30 compute-1 sudo[118828]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:32 compute-1 sshd-session[112597]: Connection closed by 192.168.122.30 port 45396
Jan 31 06:57:32 compute-1 sshd-session[112594]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:57:32 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 06:57:32 compute-1 systemd[1]: session-43.scope: Consumed 17.512s CPU time.
Jan 31 06:57:32 compute-1 systemd-logind[788]: Session 43 logged out. Waiting for processes to exit.
Jan 31 06:57:32 compute-1 systemd-logind[788]: Removed session 43.
Jan 31 06:57:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:32.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:32.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:33 compute-1 ceph-mon[81728]: pgmap v403: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:33 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:34.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:35 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 394 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:36.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:36.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:36 compute-1 ceph-mon[81728]: pgmap v404: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:37 compute-1 sudo[118855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:57:37 compute-1 sudo[118855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:57:37 compute-1 sudo[118855]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:37 compute-1 sudo[118880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:57:37 compute-1 sudo[118880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:57:37 compute-1 sudo[118880]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:38 compute-1 ceph-mon[81728]: pgmap v405: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:57:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:57:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:38.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:38.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:38 compute-1 sshd-session[118905]: Accepted publickey for zuul from 192.168.122.30 port 55444 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:57:38 compute-1 systemd-logind[788]: New session 44 of user zuul.
Jan 31 06:57:38 compute-1 systemd[1]: Started Session 44 of User zuul.
Jan 31 06:57:38 compute-1 sshd-session[118905]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:57:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:39 compute-1 sudo[119058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfbzlzbnubgeknfqmymibynfgjcggxut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842658.7656052-27-262001066827463/AnsiballZ_file.py'
Jan 31 06:57:39 compute-1 sudo[119058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:39 compute-1 python3.9[119060]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:39 compute-1 sudo[119058]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:40 compute-1 sudo[119210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjaiikwvouimcivlrvilpulypxmusozd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842659.6213255-63-91991684828926/AnsiballZ_stat.py'
Jan 31 06:57:40 compute-1 sudo[119210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:40 compute-1 ceph-mon[81728]: pgmap v406: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:40 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:40 compute-1 python3.9[119212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:40 compute-1 sudo[119210]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:40.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:57:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:40.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:57:40 compute-1 sudo[119333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imlhcbwgtucdmqvivyjexgzsuitugrct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842659.6213255-63-91991684828926/AnsiballZ_copy.py'
Jan 31 06:57:40 compute-1 sudo[119333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:40 compute-1 python3.9[119335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842659.6213255-63-91991684828926/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=6179fb8736d86099e122798f305813e20025174a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:40 compute-1 sudo[119333]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:41 compute-1 sudo[119485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzhypvbpnncmuxnisntsmvfzpwwajcrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842660.8896046-63-240150898119586/AnsiballZ_stat.py'
Jan 31 06:57:41 compute-1 sudo[119485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:41 compute-1 python3.9[119487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:57:41 compute-1 sudo[119485]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:41 compute-1 sudo[119608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nctcbpfqupcxljjxghnedgxuwvyowgto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842660.8896046-63-240150898119586/AnsiballZ_copy.py'
Jan 31 06:57:41 compute-1 sudo[119608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:41 compute-1 python3.9[119610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842660.8896046-63-240150898119586/.source.conf _original_basename=ceph.conf follow=False checksum=3fbed2da8eef23ae823cb444b6d55e1b9e218e83 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:57:41 compute-1 sudo[119608]: pam_unix(sudo:session): session closed for user root
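The tasks up to this point distributed the Ceph client configuration: a 0755 config directory, the client keyring locked down to 0600, and a world-readable ceph.conf. Each stat-then-copy pair is Ansible's copy action plugin comparing the remote checksum before shipping the file. Reconstructed roughly as tasks (paths and modes come from the logged invocations; the src names are illustrative, since the log only shows staged temp paths):

    - name: Ensure the Ceph client config directory exists
      ansible.builtin.file:
        path: /var/lib/openstack/config/ceph
        state: directory
        mode: "0755"

    - name: Install the openstack client keyring (secret material, hence 0600)
      ansible.builtin.copy:
        src: ceph.client.openstack.keyring   # illustrative source path
        dest: /var/lib/openstack/config/ceph/ceph.client.openstack.keyring
        mode: "0600"

    - name: Install ceph.conf (holds no secrets, so 0644 is fine)
      ansible.builtin.copy:
        src: ceph.conf                       # illustrative source path
        dest: /var/lib/openstack/config/ceph/ceph.conf
        mode: "0644"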
Jan 31 06:57:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:42.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:42 compute-1 ceph-mon[81728]: pgmap v407: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:42 compute-1 sshd-session[118908]: Connection closed by 192.168.122.30 port 55444
Jan 31 06:57:42 compute-1 sshd-session[118905]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:57:42 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 06:57:42 compute-1 systemd[1]: session-44.scope: Consumed 2.022s CPU time.
Jan 31 06:57:42 compute-1 systemd-logind[788]: Session 44 logged out. Waiting for processes to exit.
Jan 31 06:57:42 compute-1 systemd-logind[788]: Removed session 44.
Jan 31 06:57:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:42.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:44.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:44 compute-1 ceph-mon[81728]: pgmap v408: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:44.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:45 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:46.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:46 compute-1 ceph-mon[81728]: pgmap v409: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:48.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:48 compute-1 ceph-mon[81728]: pgmap v410: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:48.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:48 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:48 compute-1 sshd-session[119635]: Accepted publickey for zuul from 192.168.122.30 port 58112 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:57:48 compute-1 systemd-logind[788]: New session 45 of user zuul.
Jan 31 06:57:48 compute-1 systemd[1]: Started Session 45 of User zuul.
Jan 31 06:57:48 compute-1 sshd-session[119635]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:57:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:49 compute-1 sshd-session[119691]: Received disconnect from 45.148.10.141 port 51188:11:  [preauth]
Jan 31 06:57:49 compute-1 sshd-session[119691]: Disconnected from authenticating user root 45.148.10.141 port 51188 [preauth]
Jan 31 06:57:49 compute-1 python3.9[119790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:57:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:50.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:50.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:50 compute-1 ceph-mon[81728]: pgmap v411: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:50 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 409 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:50 compute-1 sudo[119944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzcwzwlaylibnwmcmgcyndmlybpeussd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842670.443117-63-186319041096464/AnsiballZ_file.py'
Jan 31 06:57:50 compute-1 sudo[119944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:50 compute-1 python3.9[119946]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:51 compute-1 sudo[119944]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:51 compute-1 sudo[120096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzdjdxrbvvoasjipzerihjwapuwzhomc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842671.1316094-63-43492470640321/AnsiballZ_file.py'
Jan 31 06:57:51 compute-1 sudo[120096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:51 compute-1 python3.9[120098]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:57:51 compute-1 sudo[120096]: pam_unix(sudo:session): session closed for user root
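Both directory tasks set setype=container_file_t, the SELinux label that allows the paths to be bind-mounted into containers. Roughly, as playbook tasks (parameters taken from the logged invocations):

    - name: Firewall snippet directory, writable by the deploy user
      ansible.builtin.file:
        path: /var/lib/edpm-config/firewall
        state: directory
        owner: zuul
        group: zuul
        mode: "0750"
        setype: container_file_t

    - name: OVN runtime directory for Open vSwitch
      ansible.builtin.file:
        path: /var/lib/openvswitch/ovn
        state: directory
        owner: openvswitch
        group: openvswitch
        setype: container_file_t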
Jan 31 06:57:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:52.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:52 compute-1 python3.9[120248]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:57:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:52.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:52 compute-1 ceph-mon[81728]: pgmap v412: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:52 compute-1 sudo[120398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxbnozomiivacvojebvccdwhnnrptxqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842672.4545743-132-21726958066332/AnsiballZ_seboolean.py'
Jan 31 06:57:52 compute-1 sudo[120398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:53 compute-1 python3.9[120400]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 06:57:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:53 compute-1 ceph-mon[81728]: pgmap v413: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:54 compute-1 sudo[120398]: pam_unix(sudo:session): session closed for user root
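The seboolean task persistently enables virt_sandbox_use_netlink; the dbus-broker-launch "op=load_policy" line at 06:57:54 is most likely the SELinux policy reload triggered by the persistent write, which would also explain why this sudo session stays open for about a second. Sketch of the logged invocation:

    - name: Allow sandboxed virt processes to use netlink sockets
      ansible.posix.seboolean:
        name: virt_sandbox_use_netlink
        state: true
        persistent: true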
Jan 31 06:57:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:54.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:54.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:54 compute-1 sudo[120554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zorpxkidwjwhemdoqktaypqgkuczlpnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842674.543966-162-163037098456609/AnsiballZ_setup.py'
Jan 31 06:57:54 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 06:57:54 compute-1 sudo[120554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:55 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 414 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:57:55 compute-1 python3.9[120556]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:57:55 compute-1 sudo[120554]: pam_unix(sudo:session): session closed for user root
Jan 31 06:57:55 compute-1 sudo[120638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gazahbvsskggjpggizkkignjnqdyibyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842674.543966-162-163037098456609/AnsiballZ_dnf.py'
Jan 31 06:57:55 compute-1 sudo[120638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:55 compute-1 python3.9[120640]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:57:56 compute-1 ceph-mon[81728]: pgmap v414: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:57:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:56.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:57:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:57:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:56.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:57:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:57 compute-1 sudo[120638]: pam_unix(sudo:session): session closed for user root
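The setup call filtered to ansible_pkg_mgr is the package-manager detection step; the dnf module then ensures openvswitch is present. The whole install window is about two seconds (06:57:55 to 06:57:57), which suggests the package was already installed or cached. Equivalent task:

    - name: Ensure Open vSwitch is installed
      ansible.builtin.dnf:
        name: openvswitch
        state: present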
Jan 31 06:57:58 compute-1 sudo[120791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vafdvbjobwgzgvndgsapnhqbxfgebabo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842677.524935-198-88271192460323/AnsiballZ_systemd.py'
Jan 31 06:57:58 compute-1 sudo[120791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:58 compute-1 ceph-mon[81728]: pgmap v415: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:57:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:57:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:57:58.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:57:58 compute-1 python3.9[120793]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:57:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:57:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:57:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:57:58.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:57:58 compute-1 sudo[120791]: pam_unix(sudo:session): session closed for user root
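With the package in place, the service is enabled and started in one systemd task, per the logged parameters:

    - name: Enable and start Open vSwitch
      ansible.builtin.systemd:
        name: openvswitch.service
        state: started
        enabled: true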
Jan 31 06:57:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:57:59 compute-1 sudo[120946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbybnizjooytpqirbkiudhoirfgyxwwy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769842678.5888414-222-168812587419702/AnsiballZ_edpm_nftables_snippet.py'
Jan 31 06:57:59 compute-1 sudo[120946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:57:59 compute-1 python3[120948]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 06:57:59 compute-1 sudo[120946]: pam_unix(sudo:session): session closed for user root
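The osp.edpm.edpm_nftables_snippet module does not program nftables directly; it stages a YAML rule description at /var/lib/edpm-config/firewall/ovn.yaml for the nftables role to render later. Rules 118 and 119 open UDP 4789 (VXLAN) and 6081 (Geneve), and rules 120/121 append NOTRACK jumps to the raw table's OUTPUT and PREROUTING chains so Geneve traffic bypasses conntrack. The task as logged, with the content trimmed to the first rule:

    - name: Stage OVN tunnel firewall rules for later rendering
      osp.edpm.edpm_nftables_snippet:
        dest: /var/lib/edpm-config/firewall/ovn.yaml
        state: present
        content: |
          - rule_name: 118 neutron vxlan networks
            rule:
              proto: udp
              dport: 4789
          # ... geneve and NOTRACK rules as in the logged invocation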
Jan 31 06:57:59 compute-1 sudo[121098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbgdjveslhbxaeedjkswcknnbuqnbhgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842679.5401778-249-246876278705465/AnsiballZ_file.py'
Jan 31 06:57:59 compute-1 sudo[121098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:57:59 compute-1 python3.9[121100]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:00 compute-1 sudo[121098]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:00 compute-1 ceph-mon[81728]: pgmap v416: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:00 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:00.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:00.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:00 compute-1 sudo[121250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnfmdyyjffybxawbzjvpkuvrqhshjoru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842680.207425-273-23561499633320/AnsiballZ_stat.py'
Jan 31 06:58:00 compute-1 sudo[121250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:00 compute-1 python3.9[121252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:00 compute-1 sudo[121250]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:01 compute-1 sudo[121328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btinyjlzbbvslfdwqgnzwkaaaclxqbsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842680.207425-273-23561499633320/AnsiballZ_file.py'
Jan 31 06:58:01 compute-1 sudo[121328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:01 compute-1 python3.9[121330]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:01 compute-1 sudo[121328]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:01 compute-1 sudo[121480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fctjfucatzmeqxlhdrixmpjrpjbbzsmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842681.4398153-309-40349751285941/AnsiballZ_stat.py'
Jan 31 06:58:01 compute-1 sudo[121480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:01 compute-1 python3.9[121482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:01 compute-1 sudo[121480]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:02 compute-1 sudo[121558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kofujkfuieehtkaitdxnidlettivvrqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842681.4398153-309-40349751285941/AnsiballZ_file.py'
Jan 31 06:58:02 compute-1 sudo[121558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:02 compute-1 ceph-mon[81728]: pgmap v417: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:02.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:02 compute-1 python3.9[121560]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.t4k7wua7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:02.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:02 compute-1 sudo[121558]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:02 compute-1 sudo[121710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syqjjhfzosldjoznfbtkbzudslaisxsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842682.5043976-345-235608633578308/AnsiballZ_stat.py'
Jan 31 06:58:02 compute-1 sudo[121710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:02 compute-1 python3.9[121712]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:02 compute-1 sudo[121710]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:03 compute-1 sudo[121788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aslrxwmslwhtuoztftyaggtlyciyxpbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842682.5043976-345-235608633578308/AnsiballZ_file.py'
Jan 31 06:58:03 compute-1 sudo[121788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:03 compute-1 python3.9[121790]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:03 compute-1 sudo[121788]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:04 compute-1 sudo[121940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udtcnslrrqhtlhghavcsmcvbdntzxhvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842683.6454563-384-224941460352115/AnsiballZ_command.py'
Jan 31 06:58:04 compute-1 sudo[121940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:04.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:04 compute-1 python3.9[121942]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:04 compute-1 ceph-mon[81728]: pgmap v418: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:04 compute-1 sudo[121940]: pam_unix(sudo:session): session closed for user root
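Before generating anything, the role snapshots the live ruleset as JSON, presumably to detect drift against the files it is about to write:

    - name: Capture the current ruleset as JSON
      ansible.builtin.command: nft -j list ruleset
      register: nft_ruleset
      changed_when: false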
Jan 31 06:58:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:58:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:04.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:58:04 compute-1 sudo[122093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccoxueijwjbmvysjhethkgkqdrjntckk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769842684.4049094-408-56524928256365/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 06:58:04 compute-1 sudo[122093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:04 compute-1 python3[122095]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 06:58:04 compute-1 sudo[122093]: pam_unix(sudo:session): session closed for user root
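edpm_nftables_from_files (logged here without a collection prefix, presumably from the same osp.edpm collection) reads every YAML snippet under /var/lib/edpm-config/firewall, including the base rules, user rules, and the ovn.yaml staged above, and renders them into the edpm-*.nft files that the following tasks install under /etc/nftables. As logged:

    - name: Render nft rule files from the staged YAML snippets
      osp.edpm.edpm_nftables_from_files:
        src: /var/lib/edpm-config/firewall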
Jan 31 06:58:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:05 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:05 compute-1 sudo[122245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toktrtafatnkgpzyuoketycjrlkqrkqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842685.155357-432-278679559928129/AnsiballZ_stat.py'
Jan 31 06:58:05 compute-1 sudo[122245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:05 compute-1 python3.9[122247]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:05 compute-1 sudo[122245]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:06 compute-1 sudo[122370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elcyyiwtlstjjpxyrqbxqkgguzpwccvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842685.155357-432-278679559928129/AnsiballZ_copy.py'
Jan 31 06:58:06 compute-1 sudo[122370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:06 compute-1 python3.9[122372]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842685.155357-432-278679559928129/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:06 compute-1 sudo[122370]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:06.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:06.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:06 compute-1 ceph-mon[81728]: pgmap v419: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:06 compute-1 sudo[122522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeavjlljomeaqjwmgyxyrprndpgojuag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842686.3989828-477-21111286986936/AnsiballZ_stat.py'
Jan 31 06:58:06 compute-1 sudo[122522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:06 compute-1 python3.9[122524]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:06 compute-1 sudo[122522]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:07 compute-1 sudo[122647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lihkjejlsngedgrysrrdorekqemfndka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842686.3989828-477-21111286986936/AnsiballZ_copy.py'
Jan 31 06:58:07 compute-1 sudo[122647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:07 compute-1 python3.9[122649]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842686.3989828-477-21111286986936/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:07 compute-1 sudo[122647]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:07 compute-1 sudo[122799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcrwuezzhmeoyivyncizzhqsaplrpsxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842687.5384576-522-56172412587056/AnsiballZ_stat.py'
Jan 31 06:58:07 compute-1 sudo[122799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:07 compute-1 python3.9[122801]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:08 compute-1 sudo[122799]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:08.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:08 compute-1 sudo[122924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmipxzlzvtxsrjszptocrqsqytcttiki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842687.5384576-522-56172412587056/AnsiballZ_copy.py'
Jan 31 06:58:08 compute-1 sudo[122924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:08.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:08 compute-1 ceph-mon[81728]: pgmap v420: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:08 compute-1 python3.9[122926]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842687.5384576-522-56172412587056/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:08 compute-1 sudo[122924]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:08 compute-1 sudo[123076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqasvhitqxqsmntimkblopihymfgvtjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842688.6491568-567-222055138135236/AnsiballZ_stat.py'
Jan 31 06:58:08 compute-1 sudo[123076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:09 compute-1 python3.9[123078]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:09 compute-1 sudo[123076]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:09 compute-1 sudo[123201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuljfmtzssumoomhaietrgnbmgniiufj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842688.6491568-567-222055138135236/AnsiballZ_copy.py'
Jan 31 06:58:09 compute-1 sudo[123201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:09 compute-1 ceph-mon[81728]: pgmap v421: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:09 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 429 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:09 compute-1 python3.9[123203]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842688.6491568-567-222055138135236/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:09 compute-1 sudo[123201]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:10 compute-1 sudo[123353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adusxaesvaqqqgehyvdiozdwvdgshebf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842689.8403404-612-4580887963327/AnsiballZ_stat.py'
Jan 31 06:58:10 compute-1 sudo[123353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:10.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:58:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:10.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:58:10 compute-1 python3.9[123355]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:10 compute-1 sudo[123353]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:10 compute-1 sudo[123478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmiojbxpdnlddmlqpygcngoqkcdzkpmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842689.8403404-612-4580887963327/AnsiballZ_copy.py'
Jan 31 06:58:10 compute-1 sudo[123478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:10 compute-1 python3.9[123480]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842689.8403404-612-4580887963327/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:10 compute-1 sudo[123478]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:11 compute-1 sudo[123630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzornhryuxjsnrfvweeoxpfmgqigkobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842691.0423267-657-253935142944680/AnsiballZ_file.py'
Jan 31 06:58:11 compute-1 sudo[123630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:11 compute-1 python3.9[123632]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:11 compute-1 sudo[123630]: pam_unix(sudo:session): session closed for user root
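The touch of edpm-rules.nft.changed is a change marker: created when new rule content lands, checked at 06:58:14 with stat, used to gate the actual nft load, and deleted at 06:58:15 once the rules are applied. Condensed sketch of the pattern; the when: conditionals are inferred from the observed sequence, not logged:

    - name: Mark that rule files changed and need (re)loading
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: touch
        owner: root
        group: root
        mode: "0600"

    - name: Check whether anything changed since the last apply
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: nft_changed

    - name: Apply the new rules only when the marker exists
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: nft_changed.stat.exists

    - name: Clear the marker after a successful load
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent
      when: nft_changed.stat.exists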
Jan 31 06:58:11 compute-1 ceph-mon[81728]: pgmap v422: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:11 compute-1 sudo[123782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-semtxxxuehhhtmmligsonufvnxekwkgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842691.6820855-681-207103885447791/AnsiballZ_command.py'
Jan 31 06:58:11 compute-1 sudo[123782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:12 compute-1 python3.9[123784]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:12 compute-1 sudo[123782]: pam_unix(sudo:session): session closed for user root
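Before anything touches the kernel, the concatenation of all five generated files is run through nft -c -f -, a parse-and-check pass that commits nothing. Equivalent task:

    - name: Validate the full generated ruleset without applying it
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
            /etc/nftables/edpm-jumps.nft | nft -c -f -
      changed_when: false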
Jan 31 06:58:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:12.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:12.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:12 compute-1 sudo[123937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjjtznbzixxomdsdidmzjxncsowhunvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842692.337922-705-158006945267531/AnsiballZ_blockinfile.py'
Jan 31 06:58:12 compute-1 sudo[123937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:12 compute-1 python3.9[123939]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:12 compute-1 sudo[123937]: pam_unix(sudo:session): session closed for user root
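Persistence across reboots: blockinfile adds include lines to /etc/sysconfig/nftables.conf, the file nftables.service loads at boot, and validate="nft -c -f %s" makes Ansible syntax-check the edited copy before moving it into place. Roughly:

    - name: Load the EDPM rule files from nftables.conf at boot
      ansible.builtin.blockinfile:
        path: /etc/sysconfig/nftables.conf
        validate: nft -c -f %s
        block: |
          include "/etc/nftables/iptables.nft"
          include "/etc/nftables/edpm-chains.nft"
          include "/etc/nftables/edpm-rules.nft"
          include "/etc/nftables/edpm-jumps.nft"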
Jan 31 06:58:13 compute-1 sudo[124089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkdbbkvnpmhamxdxipbtcmnirxgtiack ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842693.2094767-732-249645756044706/AnsiballZ_command.py'
Jan 31 06:58:13 compute-1 sudo[124089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:13 compute-1 python3.9[124091]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:13 compute-1 sudo[124089]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:13 compute-1 ceph-mon[81728]: pgmap v423: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:14 compute-1 sudo[124242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vshgabmwlksmhbfbinutavhrsmtasrch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842693.8474276-756-239354952061635/AnsiballZ_stat.py'
Jan 31 06:58:14 compute-1 sudo[124242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:14 compute-1 python3.9[124244]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:58:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:58:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:14.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:58:14 compute-1 sudo[124242]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:14.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:14 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:14 compute-1 sudo[124396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opswzmirmkybvomoamcafnhedciukeiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842694.4476638-780-155762474454846/AnsiballZ_command.py'
Jan 31 06:58:14 compute-1 sudo[124396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:14 compute-1 python3.9[124398]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:14 compute-1 sudo[124396]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:15 compute-1 sudo[124551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfzistxayfrtrpacaetlsxgyhcdccwva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842695.1196368-804-134818738215259/AnsiballZ_file.py'
Jan 31 06:58:15 compute-1 sudo[124551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:15 compute-1 python3.9[124553]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:15 compute-1 sudo[124551]: pam_unix(sudo:session): session closed for user root
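
The three task records above implement a change-marker idiom: the play stats /etc/nftables/edpm-rules.nft.changed, pipes the flush, rules, and update-jumps fragments into nft -f - to reload the ruleset atomically, then removes the marker so the next run is a no-op. A minimal sketch of that flow, assuming the same paths as the log (an illustration, not the EDPM role's actual code):

    import os
    import subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    RULE_FILES = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    def reload_if_changed() -> bool:
        # Only reload when the marker left by the rule-writing step exists.
        if not os.path.exists(MARKER):
            return False
        # Concatenate the fragments and feed them to nft on stdin,
        # mirroring `cat ... | nft -f -` from the log.
        ruleset = b"".join(open(p, "rb").read() for p in RULE_FILES)
        subprocess.run(["nft", "-f", "-"], input=ruleset, check=True)
        os.unlink(MARKER)  # clear the marker so the next run reports no change
        return True
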
Jan 31 06:58:15 compute-1 ceph-mon[81728]: pgmap v424: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:16.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:16 compute-1 python3.9[124703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:58:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:17 compute-1 sudo[124854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdxqcplhuoaxyfmzaacxqyemxvxmkpwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842697.537922-924-229608642858684/AnsiballZ_command.py'
Jan 31 06:58:17 compute-1 sudo[124854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:17 compute-1 ceph-mon[81728]: pgmap v425: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:17 compute-1 python3.9[124856]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:17 compute-1 ovs-vsctl[124857]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 06:58:17 compute-1 sudo[124854]: pam_unix(sudo:session): session closed for user root
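
The ovs-vsctl call above is how the node registers itself as an OVN chassis: ovn-controller reads each external_ids key from the local Open_vSwitch table, covering the Geneve encapsulation IP and type, bridge and chassis-MAC mappings, the SSL southbound endpoint, and the probe/wait tunables. A sketch that rebuilds the same argv from a dict, with all values copied from the logged command:

    import subprocess

    # Values copied from the logged command; they are per-host settings.
    external_ids = {
        "hostname": "compute-1.ctlplane.example.com",
        "ovn-bridge": "br-int",
        "ovn-bridge-mappings": "datacentre:br-ex",
        "ovn-chassis-mac-mappings": "datacentre:3e:0a:9e:41:65:cf",
        "ovn-encap-ip": "172.19.0.101",
        "ovn-encap-type": "geneve",
        "ovn-encap-tos": "0",
        "ovn-match-northd-version": "False",
        "ovn-monitor-all": "True",
        "ovn-remote": "ssl:ovsdbserver-sb.openstack.svc:6642",
        "ovn-remote-probe-interval": "60000",
        "ovn-ofctrl-wait-before-clear": "8000",
        "rundir": "/var/run/openvswitch",
    }

    argv = ["ovs-vsctl", "set", "open", "."]
    argv += [f"external_ids:{key}={value}" for key, value in external_ids.items()]
    subprocess.run(argv, check=True)
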
Jan 31 06:58:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:18.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:18.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:18 compute-1 sudo[125007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quyzkxzwywifylwwkttokjkselaobfti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842698.2347808-952-131892230186025/AnsiballZ_command.py'
Jan 31 06:58:18 compute-1 sudo[125007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:18 compute-1 python3.9[125009]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:18 compute-1 sudo[125007]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:18 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:19 compute-1 sudo[125162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlqqlhrrjcadgfxpelplqdcivlmiegrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842699.0763407-975-51286209900742/AnsiballZ_command.py'
Jan 31 06:58:19 compute-1 sudo[125162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:19 compute-1 python3.9[125164]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:19 compute-1 ovs-vsctl[125165]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 06:58:19 compute-1 sudo[125162]: pam_unix(sudo:session): session closed for user root
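
The two command tasks above form a check-then-create pair: the first greps ovs-vsctl show for an existing Manager record, and only when none is found does the second create a passive ptcp listener on 127.0.0.1:6640 and attach it to manager_options in a single ovs-vsctl transaction. The same guard, sketched (assumes a local ovs-vsctl):

    import subprocess

    def ensure_ovsdb_manager(target: str = 'ptcp:6640:127.0.0.1') -> None:
        # Equivalent of `ovs-vsctl show | grep -q "Manager"`.
        show = subprocess.run(["ovs-vsctl", "show"],
                              capture_output=True, text=True, check=True)
        if "Manager" in show.stdout:
            return  # already configured; keep the task idempotent
        # Create the Manager row and reference it from Open_vSwitch
        # in one transaction, as in the logged command.
        subprocess.run(
            ["ovs-vsctl", "--timeout=5", "--id=@manager", "--",
             "create", "Manager", f'target="{target}"', "--",
             "add", "Open_vSwitch", ".", "manager_options", "@manager"],
            check=True,
        )
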
Jan 31 06:58:19 compute-1 ceph-mon[81728]: pgmap v426: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:20 compute-1 python3.9[125315]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:58:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:20.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:20.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:20 compute-1 sudo[125467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbqtjaxmlldfzdpyuekdukbfxyqlvkfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842700.4362798-1026-198217607695686/AnsiballZ_file.py'
Jan 31 06:58:20 compute-1 sudo[125467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:20 compute-1 python3.9[125469]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:58:20 compute-1 sudo[125467]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:20 compute-1 ceph-mon[81728]: pgmap v427: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:21 compute-1 sudo[125619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpqmclyhvignlkmcrlfekqusxtnplday ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842701.1245656-1050-165292094940309/AnsiballZ_stat.py'
Jan 31 06:58:21 compute-1 sudo[125619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:21 compute-1 python3.9[125621]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:21 compute-1 sudo[125619]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:21 compute-1 sudo[125697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srngjnjeawhqdmtrwapntqjakspnuyzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842701.1245656-1050-165292094940309/AnsiballZ_file.py'
Jan 31 06:58:21 compute-1 sudo[125697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:21 compute-1 python3.9[125699]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:58:21 compute-1 sudo[125697]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:22.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:22 compute-1 sudo[125849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckhrhugrwxexehcxfcakofnhfctofoxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842702.085887-1050-255550758590462/AnsiballZ_stat.py'
Jan 31 06:58:22 compute-1 sudo[125849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:22.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:22 compute-1 python3.9[125851]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:22 compute-1 sudo[125849]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:22 compute-1 sudo[125927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grtptvaiuawxbmpenwrycofgwqxyzkvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842702.085887-1050-255550758590462/AnsiballZ_file.py'
Jan 31 06:58:22 compute-1 sudo[125927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:22 compute-1 python3.9[125929]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:58:22 compute-1 sudo[125927]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:23 compute-1 ceph-mon[81728]: pgmap v428: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:23 compute-1 sudo[126079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruospgqauqerntqzsmfoswkdxxilnplf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842703.2864976-1120-105307691093249/AnsiballZ_file.py'
Jan 31 06:58:23 compute-1 sudo[126079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:23 compute-1 python3.9[126081]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:23 compute-1 sudo[126079]: pam_unix(sudo:session): session closed for user root
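
The mode=420 in the file task above is not an error: an unquoted 0644 in YAML is parsed as an octal literal, so the module receives and logs the decimal integer 420, which is exactly 0o644 (rw-r--r--). A one-line check:

    # 420 decimal and 0644 octal are the same permission bits.
    assert oct(420) == "0o644"
    print(oct(420))  # -> 0o644
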
Jan 31 06:58:23 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:24 compute-1 sudo[126231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhcyjfdhhmrfxbknwqdibeqzbxolbtqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842703.8556273-1143-120590482871833/AnsiballZ_stat.py'
Jan 31 06:58:24 compute-1 sudo[126231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:24 compute-1 python3.9[126233]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:24 compute-1 sudo[126231]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:24.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:58:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:24.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:58:24 compute-1 sudo[126309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtpllkdumxxgufphdyzieqzrdimtyrtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842703.8556273-1143-120590482871833/AnsiballZ_file.py'
Jan 31 06:58:24 compute-1 sudo[126309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:24 compute-1 python3.9[126311]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:24 compute-1 sudo[126309]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:25 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:25 compute-1 ceph-mon[81728]: pgmap v429: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:25 compute-1 sudo[126461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrbaizgrtswjlxoiirornjquopfmgubm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842704.8641353-1179-254843687300581/AnsiballZ_stat.py'
Jan 31 06:58:25 compute-1 sudo[126461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:25 compute-1 python3.9[126463]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:25 compute-1 sudo[126461]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:25 compute-1 sudo[126539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmhpazdpsbzigrotbtrydocwbhyyowz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842704.8641353-1179-254843687300581/AnsiballZ_file.py'
Jan 31 06:58:25 compute-1 sudo[126539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:25 compute-1 python3.9[126541]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:25 compute-1 sudo[126539]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:26 compute-1 sudo[126691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gptynzhpcpwimewghjdblpmzcpsgjgmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842705.9023528-1215-21139514911673/AnsiballZ_systemd.py'
Jan 31 06:58:26 compute-1 sudo[126691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:26.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:26.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:26 compute-1 python3.9[126693]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:58:26 compute-1 systemd[1]: Reloading.
Jan 31 06:58:26 compute-1 systemd-rc-local-generator[126713]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:58:26 compute-1 systemd-sysv-generator[126720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:58:26 compute-1 sudo[126691]: pam_unix(sudo:session): session closed for user root
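
The systemd task above bundles three operations in order: daemon-reload (which re-runs all unit generators, hence the repeated rc.local and SysV network notices), enable, and start of edpm-container-shutdown. Roughly this sequence, sketched with plain systemctl calls:

    import subprocess

    def enable_and_start(unit: str) -> None:
        # Re-read unit files; generators run again on every reload,
        # which is why the rc.local / SysV notices repeat in the log.
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "start", unit], check=True)

    enable_and_start("edpm-container-shutdown")
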
Jan 31 06:58:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:27 compute-1 ceph-mon[81728]: pgmap v430: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:27 compute-1 sudo[126881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cngeoeqbxbqfprmarfjbsoeygqdtsfsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842706.959948-1239-74129842860069/AnsiballZ_stat.py'
Jan 31 06:58:27 compute-1 sudo[126881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:27 compute-1 python3.9[126883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:27 compute-1 sudo[126881]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:27 compute-1 sudo[126959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zanitgsjkjtszxvhmnzfuybohciwbtst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842706.959948-1239-74129842860069/AnsiballZ_file.py'
Jan 31 06:58:27 compute-1 sudo[126959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:27 compute-1 python3.9[126961]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:27 compute-1 sudo[126959]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:28 compute-1 sudo[127111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjqioivwjkarlyntzdvmdfoygabfetjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842708.028614-1275-48277955410531/AnsiballZ_stat.py'
Jan 31 06:58:28 compute-1 sudo[127111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:28.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:58:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:28.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:58:28 compute-1 python3.9[127113]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:28 compute-1 sudo[127111]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:28 compute-1 sudo[127189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fashnilktdztmmnrnzgrdjcnrroyzpes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842708.028614-1275-48277955410531/AnsiballZ_file.py'
Jan 31 06:58:28 compute-1 sudo[127189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:28 compute-1 python3.9[127191]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:28 compute-1 sudo[127189]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:29 compute-1 ceph-mon[81728]: pgmap v431: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:29 compute-1 sudo[127341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjtuqgmkstubkcwzzmvcxgyfwkuyrnqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842709.101475-1311-225366347380705/AnsiballZ_systemd.py'
Jan 31 06:58:29 compute-1 sudo[127341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:29 compute-1 python3.9[127343]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:58:29 compute-1 systemd[1]: Reloading.
Jan 31 06:58:29 compute-1 systemd-rc-local-generator[127370]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:58:29 compute-1 systemd-sysv-generator[127373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:58:29 compute-1 systemd[1]: Starting Create netns directory...
Jan 31 06:58:29 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 06:58:29 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 06:58:29 compute-1 systemd[1]: Finished Create netns directory.
Jan 31 06:58:30 compute-1 sudo[127341]: pam_unix(sudo:session): session closed for user root
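
netns-placeholder starts, finishes, and is immediately "Deactivated successfully": the signature of a Type=oneshot unit without RemainAfterExit. The accompanying run-netns-placeholder.mount deactivation suggests its payload creates (and tears down) an ip-netns bind mount under /run/netns. The unit body below is a hypothetical reconstruction written to reproduce that behavior; the log never shows the actual file content:

    import pathlib

    # Hypothetical: Description matches the "Create netns directory" text in
    # the log; the ExecStart/ExecStartPost pair is one plausible payload that
    # would produce the run-netns-placeholder.mount activity seen above.
    UNIT = "\n".join([
        "[Unit]",
        "Description=Create netns directory",
        "",
        "[Service]",
        "Type=oneshot",
        "ExecStart=/usr/sbin/ip netns add placeholder",
        "ExecStartPost=/usr/sbin/ip netns delete placeholder",
        "",
        "[Install]",
        "WantedBy=multi-user.target",
        "",
    ])
    pathlib.Path("/etc/systemd/system/netns-placeholder.service").write_text(UNIT)
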
Jan 31 06:58:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:30 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 449 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:30.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:30.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:30 compute-1 sudo[127534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykjkwlwxvpcvjpvjrmushtapetixjmzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842710.3393784-1341-118326271797401/AnsiballZ_file.py'
Jan 31 06:58:30 compute-1 sudo[127534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:30 compute-1 python3.9[127536]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:58:30 compute-1 sudo[127534]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:31 compute-1 ceph-mon[81728]: pgmap v432: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:31 compute-1 sudo[127686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qylscjqkswqupsxqwjuihesldpxfmwrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842710.9900513-1365-19759319273229/AnsiballZ_stat.py'
Jan 31 06:58:31 compute-1 sudo[127686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:31 compute-1 python3.9[127688]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:31 compute-1 sudo[127686]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:31 compute-1 sudo[127809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnbmitwbvjkeewtusyiffhqvtdrddzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842710.9900513-1365-19759319273229/AnsiballZ_copy.py'
Jan 31 06:58:31 compute-1 sudo[127809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:32 compute-1 python3.9[127811]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842710.9900513-1365-19759319273229/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:58:32 compute-1 sudo[127809]: pam_unix(sudo:session): session closed for user root
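
The stat/copy pair above is the copy module's idempotence mechanism: hash the destination, compare it against the source checksum (4098dd010265fabdf5c26b97d169fc4e575ff457 here), and rewrite only on mismatch, so an unchanged healthcheck script reports ok instead of changed. A simplified version of that logic, not Ansible's actual implementation:

    import hashlib
    import os
    import shutil

    def copy_if_changed(src: str, dest: str, mode: int = 0o700) -> bool:
        def sha1(path: str) -> str:
            digest = hashlib.sha1()
            with open(path, "rb") as handle:
                for chunk in iter(lambda: handle.read(65536), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        if os.path.exists(dest) and sha1(dest) == sha1(src):
            return False  # checksums match: change nothing, report "ok"
        shutil.copyfile(src, dest)
        os.chmod(dest, mode)
        return True  # report "changed"
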
Jan 31 06:58:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:32.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:32.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:32 compute-1 sudo[127961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrgxvvutkhqbdwcriecepcmybspxmfnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842712.435312-1416-139274763462325/AnsiballZ_file.py'
Jan 31 06:58:32 compute-1 sudo[127961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:32 compute-1 python3.9[127963]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:32 compute-1 sudo[127961]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:33 compute-1 ceph-mon[81728]: pgmap v433: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:33 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:34 compute-1 sudo[128113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsyelxkxgglpcktcrdzhnqckhotrzemz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842713.8289874-1440-274626888642206/AnsiballZ_file.py'
Jan 31 06:58:34 compute-1 sudo[128113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:34 compute-1 python3.9[128115]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:58:34 compute-1 sudo[128113]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:34.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:34.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:34 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:34 compute-1 ceph-mon[81728]: pgmap v434: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:34 compute-1 sudo[128265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blznylvfxwbhzuowuhpywwldzpvwxhrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842714.5912929-1464-218762913341598/AnsiballZ_stat.py'
Jan 31 06:58:34 compute-1 sudo[128265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:34 compute-1 python3.9[128267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:35 compute-1 sudo[128265]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:35 compute-1 sudo[128388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnxxlaqowcwfuueigzgrpiqyzfdrudxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842714.5912929-1464-218762913341598/AnsiballZ_copy.py'
Jan 31 06:58:35 compute-1 sudo[128388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:35 compute-1 python3.9[128390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842714.5912929-1464-218762913341598/.source.json _original_basename=.bc6g6nid follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:35 compute-1 sudo[128388]: pam_unix(sudo:session): session closed for user root
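
ovn_controller.json under /var/lib/kolla/config_files is the start-up contract read by the kolla container entrypoint: a command to exec, plus config_files and permissions stanzas applied before exec. Only its sha1 appears in the log, so the body below only illustrates the schema; every value is a placeholder:

    import json

    # Illustrative only: the real ovn_controller.json content is not logged.
    config = {
        "command": "/usr/bin/ovn-controller unix:/run/openvswitch/db.sock",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/*",
                "dest": "/",
                "merge": True,
                "preserve_properties": True,
            }
        ],
        "permissions": [
            {"path": "/var/log/ovn", "owner": "root:root", "recurse": True},
        ],
    }
    print(json.dumps(config, indent=2))
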
Jan 31 06:58:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:36 compute-1 python3.9[128540]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:36.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:36.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:36 compute-1 ceph-mon[81728]: pgmap v435: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:37 compute-1 sudo[128812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:58:37 compute-1 sudo[128812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:58:37 compute-1 sudo[128812]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:37 compute-1 sudo[128837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:58:37 compute-1 sudo[128837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:58:37 compute-1 sudo[128837]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:37 compute-1 sudo[128879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:58:37 compute-1 sudo[128879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:58:37 compute-1 sudo[128879]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:37 compute-1 sudo[128911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 06:58:37 compute-1 sudo[128911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:58:37 compute-1 sudo[128911]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:37 compute-1 sudo[129008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:58:37 compute-1 sudo[129008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:58:37 compute-1 sudo[129008]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:38 compute-1 sudo[129033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:58:38 compute-1 sudo[129033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:58:38 compute-1 sudo[129033]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:38 compute-1 sudo[129081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:58:38 compute-1 sudo[129081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:58:38 compute-1 sudo[129081]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:38 compute-1 sudo[129109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:58:38 compute-1 sudo[129109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
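
The ceph-admin sudo burst above is cephadm's host probe, driven by the active mgr on compute-0: /bin/true verifies passwordless escalation, which python3 locates an interpreter, and then the content-addressed cephadm copy under /var/lib/ceph/<fsid>/ runs check-host followed by gather-facts. The same sequence as it executes on this host, with the binary path copied from the log:

    import subprocess

    CEPHADM = ("/var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    probe = [
        ["sudo", "/bin/true"],              # does passwordless sudo work at all?
        ["sudo", "which", "python3"],       # locate an interpreter
        ["sudo", "python3", CEPHADM, "--timeout", "895", "check-host"],
        ["sudo", "python3", CEPHADM, "--timeout", "895", "gather-facts"],
    ]
    for command in probe:
        subprocess.run(command, check=True)
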
Jan 31 06:58:38 compute-1 sudo[129181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgkxzmfozetbstlrjkllhekdnnsvbpsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842717.7659566-1584-71042522960716/AnsiballZ_container_config_data.py'
Jan 31 06:58:38 compute-1 sudo[129181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:58:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:58:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:58:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:38.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:38.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:38 compute-1 python3.9[129183]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 06:58:38 compute-1 sudo[129181]: pam_unix(sudo:session): session closed for user root
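
container_config_data collects the per-container start configs: it globs *.json under the given config_path (here /var/lib/edpm-config/container-startup-config/ovn_controller) and returns the parsed documents keyed by file name, with config_overrides merged on top. A minimal approximation of that contract, not the module's exact code:

    import glob
    import json
    import os

    def load_container_configs(config_path, config_pattern="*.json",
                               overrides=None):
        configs = {}
        for path in sorted(glob.glob(os.path.join(config_path, config_pattern))):
            name = os.path.splitext(os.path.basename(path))[0]
            with open(path) as handle:
                configs[name] = json.load(handle)
        configs.update(overrides or {})  # config_overrides={} in the log
        return configs
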
Jan 31 06:58:38 compute-1 sudo[129109]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:38 compute-1 sshd-session[129237]: Invalid user ethereum from 2.57.122.238 port 58462
Jan 31 06:58:39 compute-1 sshd-session[129237]: Connection closed by invalid user ethereum 2.57.122.238 port 58462 [preauth]
Jan 31 06:58:39 compute-1 sudo[129366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhgydlebwbvqbijugkdvrvhaqnqigoic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842718.8161101-1617-3800961810837/AnsiballZ_container_config_hash.py'
Jan 31 06:58:39 compute-1 sudo[129366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:39 compute-1 python3.9[129368]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 06:58:39 compute-1 sudo[129366]: pam_unix(sudo:session): session closed for user root
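
container_config_hash walks the config volumes under /var/lib/openstack and derives one fingerprint per volume; when a config file changes, the hash changes, and the matching container gets recreated with the new config. One way to build such a fingerprint (an approximation, not the module's exact algorithm):

    import hashlib
    import os

    def config_volume_hash(root: str) -> str:
        digest = hashlib.sha256()
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()  # deterministic traversal order
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                digest.update(os.path.relpath(path, root).encode())
                with open(path, "rb") as handle:
                    digest.update(handle.read())
        return digest.hexdigest()
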
Jan 31 06:58:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:58:39 compute-1 ceph-mon[81728]: pgmap v436: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:58:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:58:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:58:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:58:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:58:39 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.595000) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719595099, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1488, "num_deletes": 251, "total_data_size": 2688033, "memory_usage": 2740672, "flush_reason": "Manual Compaction"}
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719608550, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1754197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11145, "largest_seqno": 12628, "table_properties": {"data_size": 1748341, "index_size": 2931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14850, "raw_average_key_size": 20, "raw_value_size": 1735467, "raw_average_value_size": 2393, "num_data_blocks": 128, "num_entries": 725, "num_filter_entries": 725, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842629, "oldest_key_time": 1769842629, "file_creation_time": 1769842719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 13579 microseconds, and 4276 cpu microseconds.
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.608602) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1754197 bytes OK
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.608620) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.611156) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.611169) EVENT_LOG_v1 {"time_micros": 1769842719611165, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.611187) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2680966, prev total WAL file size 2680966, number of live WAL files 2.
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.611650) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1713KB)], [21(7877KB)]
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719611676, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 9821087, "oldest_snapshot_seqno": -1}
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4507 keys, 7570231 bytes, temperature: kUnknown
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719702694, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 7570231, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7539526, "index_size": 18307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 111955, "raw_average_key_size": 24, "raw_value_size": 7457215, "raw_average_value_size": 1654, "num_data_blocks": 783, "num_entries": 4507, "num_filter_entries": 4507, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769842719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.704114) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7570231 bytes
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.705777) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.5 rd, 82.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.7 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.9) write-amplify(4.3) OK, records in: 5026, records dropped: 519 output_compression: NoCompression
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.705810) EVENT_LOG_v1 {"time_micros": 1769842719705799, "job": 10, "event": "compaction_finished", "compaction_time_micros": 92250, "compaction_time_cpu_micros": 13532, "output_level": 6, "num_output_files": 1, "total_output_size": 7570231, "num_input_records": 5026, "num_output_records": 4507, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719706339, "job": 10, "event": "table_file_deletion", "file_number": 23}
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842719707299, "job": 10, "event": "table_file_deletion", "file_number": 21}
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.611605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.707500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.707508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.707510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.707512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:58:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-06:58:39.707514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
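The compaction trace for JOB 10 above is internally consistent: the amplification factors in the summary line follow from the byte counts logged with it. With 1713 KB read from L0 (file 23), 7877 KB read from L6 (file 21), and 7,570,231 bytes written back to L6, write-amplify is output over L0 input, read-write-amplify is total I/O over L0 input, and records dropped is input minus output records. A minimal check, using only numbers quoted from the log lines above (the two ratio definitions are restated here from RocksDB's compaction summary format, as an assumption, not from this log itself):

    # Recompute JOB 10's amplification factors from the numbers logged above.
    l0_bytes  = 1713 * 1024      # input file at L0: "23(1713KB)"
    l6_bytes  = 7877 * 1024      # input file at L6: "21(7877KB)"
    out_bytes = 7570231          # "total_output_size" in compaction_finished

    write_amp = out_bytes / l0_bytes                          # ~4.3 -> write-amplify(4.3)
    rw_amp    = (l0_bytes + l6_bytes + out_bytes) / l0_bytes  # ~9.9 -> read-write-amplify(9.9)
    dropped   = 5026 - 4507      # num_input_records - num_output_records = 519

    print(f"write-amplify={write_amp:.1f} read-write-amplify={rw_amp:.1f} dropped={dropped}")

Both ratios round to the logged 4.3 and 9.9, and the dropped count matches the logged "records dropped: 519".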
Jan 31 06:58:40 compute-1 sudo[129518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amyxxqvricghhznookeztwkedncekyzi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769842719.7587526-1647-277258636095732/AnsiballZ_edpm_container_manage.py'
Jan 31 06:58:40 compute-1 sudo[129518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:40.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
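The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, repeating on a two-second cadence with http_status=200 and near-zero latency, have the shape of load-balancer health probes against the radosgw beast frontend. A minimal reproduction of such a probe (stdlib only; host and port are placeholders, since the journal records the probe sources but not the RGW listen address):

    import http.client

    # Hypothetical endpoint: substitute the real RGW host/port, which this
    # journal does not show.
    conn = http.client.HTTPConnection("compute-1", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # the probes above all log http_status=200
    conn.close()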
Jan 31 06:58:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:40 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 459 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
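Note how these SLOW_OPS updates age in lockstep with the reporting interval: the oldest op on osd.2 is blocked for 459 sec here and reaches 484 sec by 06:59:06 below, growing by about 5 sec per report, so this is one stuck op rather than a transient queue spike. A small sketch that pulls the ages out of journal lines in this format and flags monotonic aging:

    import re, sys

    ages = [int(m.group(1))
            for line in sys.stdin
            if (m := re.search(r"oldest one blocked for (\d+) sec", line))]

    # If every sample is older than the last, the op is not making progress.
    if len(ages) >= 2 and all(b > a for a, b in zip(ages, ages[1:])):
        print(f"slow op still aging: {ages[0]}s -> {ages[-1]}s over {len(ages)} reports")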
Jan 31 06:58:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:40.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:40 compute-1 python3[129520]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 06:58:41 compute-1 ceph-mon[81728]: pgmap v437: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:42.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:42.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:43 compute-1 ceph-mon[81728]: pgmap v438: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:44.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:44.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:44 compute-1 podman[129533]: 2026-01-31 06:58:44.876024913 +0000 UTC m=+4.340823714 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 06:58:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:44 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:44 compute-1 ceph-mon[81728]: pgmap v439: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:44 compute-1 podman[129651]: 2026-01-31 06:58:44.984439146 +0000 UTC m=+0.038036070 container create 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 06:58:44 compute-1 podman[129651]: 2026-01-31 06:58:44.961938183 +0000 UTC m=+0.015535127 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 06:58:44 compute-1 python3[129520]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
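The PODMAN-CONTAINER-DEBUG line shows the mapping edpm_container_manage applies: config_data's environment entries become repeated --env flags, volumes become --volume flags, net/privileged/user translate one-to-one, and the image comes last. A sketch of that mapping (build_podman_args is a hypothetical helper for illustration only; the real module also emits --conmon-pidfile, labels, and logging flags, as visible above):

    def build_podman_args(name: str, cfg: dict) -> list[str]:
        """Map an edpm-style config_data dict onto podman create flags."""
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args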
Jan 31 06:58:45 compute-1 sudo[129518]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:45 compute-1 sudo[129839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rymppgfortnwkllqxkjlpjhbnkucidql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842725.2384582-1671-97226056541030/AnsiballZ_stat.py'
Jan 31 06:58:45 compute-1 sudo[129839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:45 compute-1 python3.9[129841]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:58:45 compute-1 sudo[129839]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:46.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:46.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:46 compute-1 sudo[129891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:58:46 compute-1 sudo[129891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:58:46 compute-1 sudo[129891]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:46 compute-1 sudo[129945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:58:46 compute-1 sudo[129945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:58:46 compute-1 sudo[129945]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:46 compute-1 sudo[130043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsedlfphksuomyjwlpktkonfxogjhtae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842726.3840437-1698-265776988911383/AnsiballZ_file.py'
Jan 31 06:58:46 compute-1 sudo[130043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:46 compute-1 python3.9[130045]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:46 compute-1 sudo[130043]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:46 compute-1 sudo[130119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhamwuzjbapancvoyrwdprtsjfbofiup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842726.3840437-1698-265776988911383/AnsiballZ_stat.py'
Jan 31 06:58:46 compute-1 sudo[130119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:47 compute-1 python3.9[130121]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:58:47 compute-1 sudo[130119]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:58:47 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:58:47 compute-1 ceph-mon[81728]: pgmap v440: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:47 compute-1 sudo[130270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujegzydgsenkqyhxmpnnpjuucckgrkcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842727.3458245-1698-111648582746687/AnsiballZ_copy.py'
Jan 31 06:58:47 compute-1 sudo[130270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:47 compute-1 python3.9[130272]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769842727.3458245-1698-111648582746687/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:47 compute-1 sudo[130270]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:48 compute-1 sudo[130346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oacwviirenvzlphdfzdvkzdbcqxhdwvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842727.3458245-1698-111648582746687/AnsiballZ_systemd.py'
Jan 31 06:58:48 compute-1 sudo[130346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:48.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:48.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:48 compute-1 python3.9[130348]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:58:48 compute-1 systemd[1]: Reloading.
Jan 31 06:58:48 compute-1 systemd-sysv-generator[130382]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:58:48 compute-1 systemd-rc-local-generator[130379]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:58:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:48 compute-1 sudo[130346]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:49 compute-1 sudo[130460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjlwusavvsjcxlfbiigrhrnhlzyocxbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842727.3458245-1698-111648582746687/AnsiballZ_systemd.py'
Jan 31 06:58:49 compute-1 sudo[130460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:49 compute-1 python3.9[130462]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:58:49 compute-1 systemd[1]: Reloading.
Jan 31 06:58:49 compute-1 systemd-rc-local-generator[130485]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:58:49 compute-1 systemd-sysv-generator[130493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:58:50 compute-1 ceph-mon[81728]: pgmap v441: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:50 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:50 compute-1 systemd[1]: Starting ovn_controller container...
Jan 31 06:58:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:50.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:50 compute-1 systemd[1]: Started libcrun container.
Jan 31 06:58:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20507a3e311db2508a64045aee7203573f5c23a8e4f542dbab7f29c252bfaa5b/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 06:58:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:50.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:50 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1.
Jan 31 06:58:50 compute-1 podman[130504]: 2026-01-31 06:58:50.431461831 +0000 UTC m=+0.274413158 container init 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 06:58:50 compute-1 ovn_controller[130520]: + sudo -E kolla_set_configs
Jan 31 06:58:50 compute-1 podman[130504]: 2026-01-31 06:58:50.46019179 +0000 UTC m=+0.303143097 container start 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 06:58:50 compute-1 edpm-start-podman-container[130504]: ovn_controller
Jan 31 06:58:50 compute-1 systemd[1]: Created slice User Slice of UID 0.
Jan 31 06:58:50 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 06:58:50 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 06:58:50 compute-1 systemd[1]: Starting User Manager for UID 0...
Jan 31 06:58:50 compute-1 edpm-start-podman-container[130503]: Creating additional drop-in dependency for "ovn_controller" (572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1)
Jan 31 06:58:50 compute-1 systemd[130557]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 31 06:58:50 compute-1 systemd[1]: Reloading.
Jan 31 06:58:50 compute-1 podman[130527]: 2026-01-31 06:58:50.568706855 +0000 UTC m=+0.097007318 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:58:50 compute-1 systemd-rc-local-generator[130604]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:58:50 compute-1 systemd-sysv-generator[130610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:58:50 compute-1 systemd[130557]: Queued start job for default target Main User Target.
Jan 31 06:58:50 compute-1 systemd[130557]: Created slice User Application Slice.
Jan 31 06:58:50 compute-1 systemd[130557]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 06:58:50 compute-1 systemd[130557]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 06:58:50 compute-1 systemd[130557]: Reached target Paths.
Jan 31 06:58:50 compute-1 systemd[130557]: Reached target Timers.
Jan 31 06:58:50 compute-1 systemd[130557]: Starting D-Bus User Message Bus Socket...
Jan 31 06:58:50 compute-1 systemd[130557]: Starting Create User's Volatile Files and Directories...
Jan 31 06:58:50 compute-1 systemd[130557]: Finished Create User's Volatile Files and Directories.
Jan 31 06:58:50 compute-1 systemd[130557]: Listening on D-Bus User Message Bus Socket.
Jan 31 06:58:50 compute-1 systemd[130557]: Reached target Sockets.
Jan 31 06:58:50 compute-1 systemd[130557]: Reached target Basic System.
Jan 31 06:58:50 compute-1 systemd[130557]: Reached target Main User Target.
Jan 31 06:58:50 compute-1 systemd[130557]: Startup finished in 115ms.
Jan 31 06:58:50 compute-1 systemd[1]: Started User Manager for UID 0.
Jan 31 06:58:50 compute-1 systemd[1]: Started ovn_controller container.
Jan 31 06:58:50 compute-1 systemd[1]: 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1-3caee788933776b8.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 06:58:50 compute-1 systemd[1]: 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1-3caee788933776b8.service: Failed with result 'exit-code'.
Jan 31 06:58:50 compute-1 systemd[1]: Started Session c1 of User root.
Jan 31 06:58:50 compute-1 sudo[130460]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:50 compute-1 ovn_controller[130520]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 06:58:50 compute-1 ovn_controller[130520]: INFO:__main__:Validating config file
Jan 31 06:58:50 compute-1 ovn_controller[130520]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 06:58:50 compute-1 ovn_controller[130520]: INFO:__main__:Writing out command to execute
Jan 31 06:58:50 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 06:58:50 compute-1 ovn_controller[130520]: ++ cat /run_command
Jan 31 06:58:50 compute-1 ovn_controller[130520]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 06:58:50 compute-1 ovn_controller[130520]: + ARGS=
Jan 31 06:58:50 compute-1 ovn_controller[130520]: + sudo kolla_copy_cacerts
Jan 31 06:58:50 compute-1 systemd[1]: Started Session c2 of User root.
Jan 31 06:58:50 compute-1 ovn_controller[130520]: + [[ ! -n '' ]]
Jan 31 06:58:50 compute-1 ovn_controller[130520]: + . kolla_extend_start
Jan 31 06:58:50 compute-1 ovn_controller[130520]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 06:58:50 compute-1 ovn_controller[130520]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 06:58:50 compute-1 ovn_controller[130520]: + umask 0022
Jan 31 06:58:50 compute-1 ovn_controller[130520]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
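The exec line hands ovn-controller the TLS material mounted into the container: -p the private key, -c the client certificate, -C the CA bundle used to verify the southbound database at ovsdbserver-sb.openstack.svc:6642. When such a connection fails to come up, one quick sanity check is whether the mounted key and certificate actually pair; a sketch using the pyca/cryptography package against the container paths shown above (assuming, as the mounts suggest, that the files are PEM-encoded):

    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    cert = x509.load_pem_x509_certificate(
        open("/etc/pki/tls/certs/ovndb.crt", "rb").read())
    key = serialization.load_pem_private_key(
        open("/etc/pki/tls/private/ovndb.key", "rb").read(), password=None)

    # The pair matches iff the certificate embeds the key's public half.
    pub = lambda k: k.public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
    print("key matches cert:", pub(key.public_key()) == pub(cert.public_key()))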
Jan 31 06:58:50 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9478] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9485] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <warn>  [1769842730.9487] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 06:58:50 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9497] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9501] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9504] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 06:58:50 compute-1 kernel: br-int: entered promiscuous mode
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 06:58:50 compute-1 ovn_controller[130520]: 2026-01-31T06:58:50Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9664] manager: (ovn-e3f377-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9672] manager: (ovn-facc7c-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9676] manager: (ovn-5c7f3d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 31 06:58:50 compute-1 systemd-udevd[130656]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:58:50 compute-1 systemd-udevd[130657]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:58:50 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9808] device (genev_sys_6081): carrier: link connected
Jan 31 06:58:50 compute-1 NetworkManager[49028]: <info>  [1769842730.9810] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Jan 31 06:58:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:50 compute-1 ceph-mon[81728]: pgmap v442: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:51 compute-1 python3.9[130787]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 06:58:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:52.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:52.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:54 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:58:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:54.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:54.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:54 compute-1 ceph-mon[81728]: pgmap v443: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:54 compute-1 sudo[130937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewyyuqpriaujalndutpvlgihrbjaljkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842734.5043697-1833-247351281154462/AnsiballZ_stat.py'
Jan 31 06:58:54 compute-1 sudo[130937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:54 compute-1 python3.9[130939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:58:54 compute-1 sudo[130937]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:55 compute-1 sudo[131060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdpbaddhnfiyrgmparqbubwouyibiina ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842734.5043697-1833-247351281154462/AnsiballZ_copy.py'
Jan 31 06:58:55 compute-1 sudo[131060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:55 compute-1 python3.9[131062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842734.5043697-1833-247351281154462/.source.yaml _original_basename=.50s6w0yf follow=False checksum=3e5620720bba2617b7a3787a2b0e7617152eaa46 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:55 compute-1 sudo[131060]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:55 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:58:55 compute-1 ceph-mon[81728]: pgmap v444: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:55 compute-1 sudo[131212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrxynyyaexcnwjmxlgfnbzdvonjufpmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842735.6062114-1878-158173017695233/AnsiballZ_command.py'
Jan 31 06:58:55 compute-1 sudo[131212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:56 compute-1 python3.9[131214]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:56 compute-1 ovs-vsctl[131215]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 06:58:56 compute-1 sudo[131212]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:56.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:58:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:56.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:58:56 compute-1 sudo[131365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zysmurpznsmiookcovpamaatrqrwmnbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842736.2669654-1902-134584496886833/AnsiballZ_command.py'
Jan 31 06:58:56 compute-1 sudo[131365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:56 compute-1 python3.9[131367]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:56 compute-1 ovs-vsctl[131369]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 06:58:56 compute-1 sudo[131365]: pam_unix(sudo:session): session closed for user root
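The ERR above is expected on a host where ovn-cms-options was never set, and because the task pipes ovs-vsctl into sed under a shell, the pipeline's exit status is sed's, so the play does not fail. ovs-vsctl's get accepts --if-exists, which prints a blank line instead of erroring when the key is absent; a sketch of the quieter probe (a hypothetical wrapper, not the playbook's code):

    import subprocess

    def get_cms_options() -> str:
        """Return ovn-cms-options, or "" when unset, without an ERR log line."""
        out = subprocess.run(
            ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
             "external_ids:ovn-cms-options"],
            capture_output=True, text=True, check=True).stdout.strip()
        return out.strip('"')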
Jan 31 06:58:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:58:56 compute-1 ceph-mon[81728]: pgmap v445: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:58:57 compute-1 sudo[131520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjxewwcvdcuxzqgwglkykhketpwjqtig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842737.2272933-1944-53334878293945/AnsiballZ_command.py'
Jan 31 06:58:57 compute-1 sudo[131520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:57 compute-1 python3.9[131522]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:57 compute-1 ovs-vsctl[131523]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 06:58:57 compute-1 sudo[131520]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:58 compute-1 sshd-session[119638]: Connection closed by 192.168.122.30 port 58112
Jan 31 06:58:58 compute-1 sshd-session[119635]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:58:58 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 06:58:58 compute-1 systemd[1]: session-45.scope: Consumed 46.435s CPU time.
Jan 31 06:58:58 compute-1 systemd-logind[788]: Session 45 logged out. Waiting for processes to exit.
Jan 31 06:58:58 compute-1 systemd-logind[788]: Removed session 45.
Jan 31 06:58:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:58:58.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:58:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:58:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:58:58.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:58:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:00 compute-1 ceph-mon[81728]: pgmap v446: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:00 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 479 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:59:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:00.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:59:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:00.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:01 compute-1 systemd[1]: Stopping User Manager for UID 0...
Jan 31 06:59:01 compute-1 systemd[130557]: Activating special unit Exit the Session...
Jan 31 06:59:01 compute-1 systemd[130557]: Stopped target Main User Target.
Jan 31 06:59:01 compute-1 systemd[130557]: Stopped target Basic System.
Jan 31 06:59:01 compute-1 systemd[130557]: Stopped target Paths.
Jan 31 06:59:01 compute-1 systemd[130557]: Stopped target Sockets.
Jan 31 06:59:01 compute-1 systemd[130557]: Stopped target Timers.
Jan 31 06:59:01 compute-1 systemd[130557]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 06:59:01 compute-1 systemd[130557]: Closed D-Bus User Message Bus Socket.
Jan 31 06:59:01 compute-1 systemd[130557]: Stopped Create User's Volatile Files and Directories.
Jan 31 06:59:01 compute-1 systemd[130557]: Removed slice User Application Slice.
Jan 31 06:59:01 compute-1 systemd[130557]: Reached target Shutdown.
Jan 31 06:59:01 compute-1 systemd[130557]: Finished Exit the Session.
Jan 31 06:59:01 compute-1 systemd[130557]: Reached target Exit the Session.
Jan 31 06:59:01 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 06:59:01 compute-1 systemd[1]: Stopped User Manager for UID 0.
Jan 31 06:59:01 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 06:59:01 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 06:59:01 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 06:59:01 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 06:59:01 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 06:59:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:01 compute-1 ceph-mon[81728]: pgmap v447: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:02.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:02.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:04.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:04.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:04 compute-1 ceph-mon[81728]: pgmap v448: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
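The _set_new_cache_sizes line above shows the monitor's cache autotuner dividing roughly 973 MiB (cache_size: 1020054731 bytes) into incremental-map, full-map, and RocksDB kv allocations: 332 MiB + 332 MiB + 308 MiB ≈ 972 MiB, consistent with the stated total. The budget is presumably derived from the monitor's memory target; a quick way to read that target, assuming the mon admin socket is reachable on this node:

    # Sketch: query the memory target the autotuner works against (socket access assumed).
    ceph daemon mon.compute-1 config get mon_memory_target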
Jan 31 06:59:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:06.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:06.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:06 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:06 compute-1 ceph-mon[81728]: pgmap v449: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
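At this point the health check shows the single slow op blocked for 484 s on osd.2 (SLOW_OPS), matching the one PG reported active+clean+laggy in each pgmap line and the recurring "1 slow requests ... pool [ 'images' : 1 ]" messages. A first-pass inspection, sketched on the assumption that the ceph CLI and the osd.2 admin socket are reachable from this host:

    # Show the SLOW_OPS detail summarized in the health check above.
    ceph health detail
    # List the ops currently blocked on the OSD named in the warning.
    ceph daemon osd.2 dump_ops_in_flight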
Jan 31 06:59:06 compute-1 sshd-session[131550]: Accepted publickey for zuul from 192.168.122.30 port 41218 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 06:59:06 compute-1 systemd-logind[788]: New session 47 of user zuul.
Jan 31 06:59:06 compute-1 systemd[1]: Started Session 47 of User zuul.
Jan 31 06:59:06 compute-1 sshd-session[131550]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:59:07 compute-1 python3.9[131703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
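The setup invocation above gathers only local facts (gather_subset=['!all', '!min', 'local'] with fact_path=/etc/ansible/facts.d), the cheap form a play uses when it needs custom facts from /etc/ansible/facts.d but not full hardware discovery. The same filtered gather can be reproduced ad hoc (a sketch; the inventory host name is an assumption):

    # Hypothetical ad-hoc equivalent of the logged fact gathering.
    ansible compute-1 -m ansible.builtin.setup \
        -a 'gather_subset=!all,!min,local fact_path=/etc/ansible/facts.d'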
Jan 31 06:59:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:07 compute-1 ceph-mon[81728]: pgmap v450: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:59:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:08.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:59:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:08.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:08 compute-1 sudo[131857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apoyobgubxkhhzbsuvlgdovkgtpkbbzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842748.3857498-63-224979238245585/AnsiballZ_file.py'
Jan 31 06:59:08 compute-1 sudo[131857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:08 compute-1 python3.9[131859]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:08 compute-1 sudo[131857]: pam_unix(sudo:session): session closed for user root
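The sudo/AnsiballZ pair above ran ansible.builtin.file to ensure /var/lib/openstack/neutron-ovn-metadata-agent exists as a directory owned by zuul:zuul with SELinux type container_file_t (no mode was given). A rough shell equivalent of what that task enforces, minus Ansible's idempotence checking (a sketch, not the commands Ansible actually runs):

    # Create the directory with the logged ownership, then apply the SELinux type.
    install -d -o zuul -g zuul /var/lib/openstack/neutron-ovn-metadata-agent
    chcon -t container_file_t /var/lib/openstack/neutron-ovn-metadata-agent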
Jan 31 06:59:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:09 compute-1 ceph-mon[81728]: pgmap v451: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:09 compute-1 sudo[132010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyobhwatayqsquzmhzydsyntclirwhhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842749.06076-63-84201692361435/AnsiballZ_file.py'
Jan 31 06:59:09 compute-1 sudo[132010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:09 compute-1 python3.9[132012]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:09 compute-1 sudo[132010]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:59:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5888 writes, 25K keys, 5888 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5888 writes, 867 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5888 writes, 25K keys, 5888 commit groups, 1.0 writes per commit group, ingest: 18.81 MB, 0.03 MB/s
                                           Interval WAL: 5888 writes, 867 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
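The block that ends here is a single multi-line ceph-osd message: RocksDB's periodic stats dump. Total and interval uptime are both ~600 s, so this reads as the first dump of a ten-minute cycle, and the DB Stats header works out to about 5888 writes / 600 s ≈ 9.8 commit groups per second at 0.03 MB/s ingest with zero stall time; every column family (default, m-*, p-*, O-*, L, P) is otherwise idle. The headers of successive dumps can be pulled from the journal for trending, e.g.:

    # Print the DB Stats header of each dump logged by ceph-osd on this host.
    journalctl -t ceph-osd --no-pager | grep -A7 '\*\* DB Stats \*\*'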
Jan 31 06:59:09 compute-1 sudo[132162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyahkpyyuwsggzgnhhnulqkdqawsnmkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842749.5854757-63-156795879205022/AnsiballZ_file.py'
Jan 31 06:59:09 compute-1 sudo[132162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:09 compute-1 python3.9[132164]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:10 compute-1 sudo[132162]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:10 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:10 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:10.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:10 compute-1 sudo[132315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpehmupnjcqkovlizjlzdgohdiluduyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842750.1257765-63-269100607693143/AnsiballZ_file.py'
Jan 31 06:59:10 compute-1 sudo[132315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:10.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:10 compute-1 python3.9[132317]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:10 compute-1 sudo[132315]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:10 compute-1 sudo[132467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efryojogxfcsxvxblwdgdlinbbqcylqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842750.6367915-63-55867184124620/AnsiballZ_file.py'
Jan 31 06:59:10 compute-1 sudo[132467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:11 compute-1 python3.9[132469]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:11 compute-1 sudo[132467]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:11 compute-1 ceph-mon[81728]: pgmap v452: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:12 compute-1 python3.9[132619]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:59:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:12.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:12.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:12 compute-1 sudo[132769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aluwrawcuqaggyfkyaevyhitqhbpyjyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842752.4579022-195-259232043836975/AnsiballZ_seboolean.py'
Jan 31 06:59:12 compute-1 sudo[132769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:13 compute-1 python3.9[132771]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 06:59:13 compute-1 sudo[132769]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:13 compute-1 ceph-mon[81728]: pgmap v453: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:14 compute-1 python3.9[132921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:14.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:14.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:14 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:14 compute-1 ceph-mon[81728]: pgmap v454: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:14 compute-1 python3.9[133042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842753.7389717-219-7362172379083/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:15 compute-1 python3.9[133192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:15 compute-1 python3.9[133313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842755.0639594-264-186620261918825/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:16.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:16 compute-1 sudo[133463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpoylmzzymhfejovpxpsrzgcboaashkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842756.324956-315-213427802485520/AnsiballZ_setup.py'
Jan 31 06:59:16 compute-1 sudo[133463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:16 compute-1 python3.9[133465]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:59:17 compute-1 sudo[133463]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:17 compute-1 sudo[133548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxmlnzpgskcduwceneuuxcciygoktdem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842756.324956-315-213427802485520/AnsiballZ_dnf.py'
Jan 31 06:59:17 compute-1 sudo[133548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:18.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:18 compute-1 python3.9[133550]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:59:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:20.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:20.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:21 compute-1 ovn_controller[130520]: 2026-01-31T06:59:21Z|00025|memory|INFO|17280 kB peak resident set size after 30.2 seconds
Jan 31 06:59:21 compute-1 ovn_controller[130520]: 2026-01-31T06:59:21Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 31 06:59:21 compute-1 podman[133552]: 2026-01-31 06:59:21.164773369 +0000 UTC m=+0.083037514 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 06:59:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:21 compute-1 ceph-mon[81728]: pgmap v455: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:22 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:22 compute-1 sudo[133548]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:22.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:22 compute-1 ceph-mon[81728]: pgmap v456: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:22 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 499 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:22 compute-1 ceph-mon[81728]: pgmap v457: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:22.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:22 compute-1 sudo[133727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zayancggavupmlzxqrdafvpplpuxksah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842762.2292943-351-174637561581286/AnsiballZ_systemd.py'
Jan 31 06:59:22 compute-1 sudo[133727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:22 compute-1 python3.9[133729]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:59:23 compute-1 sudo[133727]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:23 compute-1 ceph-mon[81728]: pgmap v458: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:24.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:24.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:24 compute-1 python3.9[133882]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:25 compute-1 python3.9[134003]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842764.288645-375-267026204124652/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:25 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:25 compute-1 ceph-mon[81728]: pgmap v459: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:25 compute-1 python3.9[134153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:26 compute-1 python3.9[134274]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842765.2281444-375-30867933935223/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:26.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:26.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:27 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:27 compute-1 python3.9[134424]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:27 compute-1 ceph-mon[81728]: pgmap v460: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:27 compute-1 python3.9[134545]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842767.049627-507-7309572014463/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:28 compute-1 python3.9[134695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:28.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:28.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:28 compute-1 python3.9[134816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842767.970229-507-140665362137060/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:29 compute-1 python3.9[134966]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:59:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:29 compute-1 ceph-mon[81728]: pgmap v461: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:29 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:29 compute-1 sudo[135118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbwvbmfbqenmtomhdzyhgeqtflwxutxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842769.7554543-621-149763168160801/AnsiballZ_file.py'
Jan 31 06:59:29 compute-1 sudo[135118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:30 compute-1 python3.9[135120]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:30 compute-1 sudo[135118]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:30.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:30.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:30 compute-1 sudo[135270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxoszseznupubzypaycyifbcowjpooyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842770.3065283-645-90540202606204/AnsiballZ_stat.py'
Jan 31 06:59:30 compute-1 sudo[135270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:30 compute-1 ceph-mon[81728]: pgmap v462: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:30 compute-1 python3.9[135272]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:30 compute-1 sudo[135270]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:31 compute-1 sudo[135348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxmtawllntutlxxryffedivsiyposrjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842770.3065283-645-90540202606204/AnsiballZ_file.py'
Jan 31 06:59:31 compute-1 sudo[135348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:31 compute-1 python3.9[135350]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:31 compute-1 sudo[135348]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:31 compute-1 sudo[135500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iekuccapiodumidqasyklsyshygcrymy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842771.3169384-645-154799717392670/AnsiballZ_stat.py'
Jan 31 06:59:31 compute-1 sudo[135500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:31 compute-1 python3.9[135502]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:31 compute-1 sudo[135500]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:31 compute-1 sudo[135578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpvtfbafymzojxymjffmftouxgmazkxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842771.3169384-645-154799717392670/AnsiballZ_file.py'
Jan 31 06:59:31 compute-1 sudo[135578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:32 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:32 compute-1 python3.9[135580]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:32 compute-1 sudo[135578]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:32.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:32 compute-1 sudo[135730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtrhcziststdgzftrirgwmnauevvmspw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842772.22165-714-82689246961438/AnsiballZ_file.py'
Jan 31 06:59:32 compute-1 sudo[135730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:32 compute-1 python3.9[135732]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:32 compute-1 sudo[135730]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:32 compute-1 ceph-mon[81728]: pgmap v463: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:33 compute-1 sudo[135882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqpqqjwqpkrmiwmmquujttaubshboitw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842772.8887591-738-42302253505807/AnsiballZ_stat.py'
Jan 31 06:59:33 compute-1 sudo[135882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:33 compute-1 python3.9[135884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:33 compute-1 sudo[135882]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:33 compute-1 sudo[135960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ochzmsfrknjeizpjnobbjgvdvhvhjjfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842772.8887591-738-42302253505807/AnsiballZ_file.py'
Jan 31 06:59:33 compute-1 sudo[135960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:33 compute-1 python3.9[135962]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:33 compute-1 sudo[135960]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:34 compute-1 sudo[136112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qckkoqtutakelmievpagyeddcxztcypg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842773.93637-774-129416168852993/AnsiballZ_stat.py'
Jan 31 06:59:34 compute-1 sudo[136112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:34 compute-1 python3.9[136114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:34.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:34 compute-1 sudo[136112]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:34 compute-1 sudo[136190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xntulsbigafjizdgsomfgfmxdybvydrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842773.93637-774-129416168852993/AnsiballZ_file.py'
Jan 31 06:59:34 compute-1 sudo[136190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:34 compute-1 python3.9[136192]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:34 compute-1 sudo[136190]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:34 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:34 compute-1 ceph-mon[81728]: pgmap v464: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:35 compute-1 sudo[136342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feoihstlgmbfkabmipkrdjbsihvtkuxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842775.0145895-810-258273113473517/AnsiballZ_systemd.py'
Jan 31 06:59:35 compute-1 sudo[136342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:35 compute-1 python3.9[136344]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:59:35 compute-1 systemd[1]: Reloading.
Jan 31 06:59:35 compute-1 systemd-sysv-generator[136369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:59:35 compute-1 systemd-rc-local-generator[136364]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:59:35 compute-1 sudo[136342]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:36 compute-1 sudo[136532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsseqvkfdpvstsjngmfknjfhmftkhtwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842776.137756-834-280129305404748/AnsiballZ_stat.py'
Jan 31 06:59:36 compute-1 sudo[136532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:36.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:36.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:36 compute-1 python3.9[136534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:36 compute-1 sudo[136532]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:36 compute-1 sudo[136610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omtjgkpffrrpuerbtlpgfszxclukxtkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842776.137756-834-280129305404748/AnsiballZ_file.py'
Jan 31 06:59:36 compute-1 sudo[136610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:37 compute-1 python3.9[136612]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:37 compute-1 sudo[136610]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:37 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:37 compute-1 sudo[136762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgtsunwpzrubmvwsmokjaylmuacyovid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842777.2326238-870-120135172040290/AnsiballZ_stat.py'
Jan 31 06:59:37 compute-1 sudo[136762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:37 compute-1 ceph-mon[81728]: pgmap v465: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:37 compute-1 python3.9[136764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:37 compute-1 sudo[136762]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:37 compute-1 sudo[136840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bubwemashsphmmohglpjhnstsakhvtpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842777.2326238-870-120135172040290/AnsiballZ_file.py'
Jan 31 06:59:37 compute-1 sudo[136840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:38 compute-1 python3.9[136842]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:38 compute-1 sudo[136840]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:38.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:38.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:38 compute-1 sudo[136992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhuqfwhvbljgfuqxmjgtzmsngeuapxeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842778.3141603-906-13002383375852/AnsiballZ_systemd.py'
Jan 31 06:59:38 compute-1 sudo[136992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:38 compute-1 python3.9[136994]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:59:38 compute-1 systemd[1]: Reloading.
Jan 31 06:59:38 compute-1 systemd-rc-local-generator[137020]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:59:38 compute-1 systemd-sysv-generator[137023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:59:39 compute-1 systemd[1]: Starting Create netns directory...
Jan 31 06:59:39 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 06:59:39 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 06:59:39 compute-1 systemd[1]: Finished Create netns directory.
Jan 31 06:59:39 compute-1 sudo[136992]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:39 compute-1 ceph-mon[81728]: pgmap v466: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:39 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:39 compute-1 sudo[137187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgirjpfptboezavqkputrnehrkebnqsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842779.5766103-936-24094989643983/AnsiballZ_file.py'
Jan 31 06:59:39 compute-1 sudo[137187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:40 compute-1 python3.9[137189]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:40 compute-1 sudo[137187]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:40.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:40.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:40 compute-1 sudo[137339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgvotnbeaifhfpzsblvspvvjardqbwnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842780.2939613-960-165861307009001/AnsiballZ_stat.py'
Jan 31 06:59:40 compute-1 sudo[137339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:40 compute-1 python3.9[137341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:40 compute-1 sudo[137339]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:40 compute-1 ceph-mon[81728]: pgmap v467: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:41 compute-1 sudo[137462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbjebdzlwrwxnvopswtjjqwjxtvvrvww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842780.2939613-960-165861307009001/AnsiballZ_copy.py'
Jan 31 06:59:41 compute-1 sudo[137462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:41 compute-1 python3.9[137464]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842780.2939613-960-165861307009001/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:41 compute-1 sudo[137462]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:42 compute-1 sudo[137614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jevakcpvplocqzvsghpkcaqusdiwmxuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842781.8179765-1011-213382607349922/AnsiballZ_file.py'
Jan 31 06:59:42 compute-1 sudo[137614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:42 compute-1 python3.9[137616]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:42 compute-1 sudo[137614]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:42.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 06:59:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:42.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 06:59:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:42 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:42 compute-1 sudo[137766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sltjkgfbddtgolfxcwbkxfibyahpwxiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842782.6308942-1035-258296304985099/AnsiballZ_file.py'
Jan 31 06:59:42 compute-1 sudo[137766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:43 compute-1 python3.9[137768]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:43 compute-1 sudo[137766]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:43 compute-1 ceph-mon[81728]: pgmap v468: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:43 compute-1 sudo[137918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtsaisscxakksprosgjtnsvyozygeggc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842783.4401317-1059-279106708642815/AnsiballZ_stat.py'
Jan 31 06:59:43 compute-1 sudo[137918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:43 compute-1 python3.9[137920]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:43 compute-1 sudo[137918]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:44 compute-1 sudo[138041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fucqdfkxryrdekrzbkvhobbkefzdtyng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842783.4401317-1059-279106708642815/AnsiballZ_copy.py'
Jan 31 06:59:44 compute-1 sudo[138041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:44 compute-1 python3.9[138043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842783.4401317-1059-279106708642815/.source.json _original_basename=.s8fv_nse follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:44.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:44 compute-1 sudo[138041]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:44.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:45 compute-1 python3.9[138193]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:45 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:45 compute-1 ceph-mon[81728]: pgmap v469: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:46.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:46 compute-1 sudo[138489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:59:46 compute-1 sudo[138489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:59:46 compute-1 sudo[138489]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:46 compute-1 ceph-mon[81728]: pgmap v470: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:46 compute-1 sudo[138514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:59:46 compute-1 sudo[138514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:59:46 compute-1 sudo[138514]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:46 compute-1 sudo[138539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 06:59:46 compute-1 sudo[138539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:59:46 compute-1 sudo[138539]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:46 compute-1 sudo[138572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 06:59:46 compute-1 sudo[138572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:59:47 compute-1 sudo[138572]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:47 compute-1 sudo[138746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdwbaenixzioduzuvwsxkceyhqbwyoql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842786.7547066-1179-277091613937344/AnsiballZ_container_config_data.py'
Jan 31 06:59:47 compute-1 sudo[138746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:47 compute-1 python3.9[138748]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 06:59:47 compute-1 sudo[138746]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:47 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:48 compute-1 sudo[138898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghryztocmxxorgqisjxsycypkmayxbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842787.7086413-1212-141404145115225/AnsiballZ_container_config_hash.py'
Jan 31 06:59:48 compute-1 sudo[138898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 06:59:48 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 06:59:48 compute-1 python3.9[138900]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 06:59:48 compute-1 sudo[138898]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:48.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:48.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:49 compute-1 sudo[139050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ullhyzsryjjvrbopqkthaybgcgmybqxw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769842788.6370747-1242-149164269981394/AnsiballZ_edpm_container_manage.py'
Jan 31 06:59:49 compute-1 sudo[139050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:49 compute-1 python3[139052]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 06:59:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:49 compute-1 ceph-mon[81728]: pgmap v471: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:50.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:50.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:50 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 529 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:50 compute-1 ceph-mon[81728]: pgmap v472: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:52.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:52 compute-1 podman[139112]: 2026-01-31 06:59:52.578564206 +0000 UTC m=+0.497434798 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 06:59:52 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:53 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:59:53 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 2245 writes, 13K keys, 2245 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2245 writes, 2245 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2245 writes, 13K keys, 2245 commit groups, 1.0 writes per commit group, ingest: 24.03 MB, 0.04 MB/s
                                           Interval WAL: 2245 writes, 2245 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.3      0.27              0.03         5    0.054       0      0       0.0       0.0
                                             L6      1/0    7.22 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     81.5     68.3      0.47              0.06         4    0.116     18K   1790       0.0       0.0
                                            Sum      1/0    7.22 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     51.5     61.3      0.74              0.09         9    0.082     18K   1790       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     52.3     62.3      0.72              0.09         8    0.091     18K   1790       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     81.5     68.3      0.47              0.06         4    0.116     18K   1790       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     51.6      0.26              0.03         4    0.065       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.013, interval 0.013
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.7 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9bf1831f0#2 capacity: 308.00 MB usage: 1.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000106 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(62,1.10 MB,0.35657%) FilterBlock(9,62.11 KB,0.0196928%) IndexBlock(9,115.98 KB,0.0367747%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 06:59:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:53 compute-1 ceph-mon[81728]: pgmap v473: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:54.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 06:59:56 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 534 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 06:59:56 compute-1 ceph-mon[81728]: pgmap v474: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 06:59:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 06:59:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 06:59:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:56.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:57 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:59:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:06:59:58.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 06:59:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 06:59:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 06:59:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:06:59:58.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:00.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:00.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:01 compute-1 ceph-mon[81728]: pgmap v475: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:02.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:02.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:04.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:04.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:06 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:00:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:06.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:00:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:06.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:06 compute-1 ceph-mds[84120]: mds.beacon.cephfs.compute-1.hhzmle missed beacon ack from the monitors
Jan 31 07:00:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:08.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:08.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:00:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:10.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:00:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:10.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:12 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:12.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: pgmap v476: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:13 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 539 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:13 compute-1 ceph-mon[81728]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 539 sec, osd.2 has slow ops
Jan 31 07:00:13 compute-1 ceph-mon[81728]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 539 sec, osd.2 has slow ops
Jan 31 07:00:13 compute-1 ceph-mon[81728]: pgmap v477: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: pgmap v478: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: pgmap v479: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:13 compute-1 ceph-mon[81728]: pgmap v480: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:13 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:13 compute-1 ceph-mon[81728]: pgmap v481: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:13 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:00:13 compute-1 ceph-mon[81728]: pgmap v482: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:13 compute-1 podman[139063]: 2026-01-31 07:00:13.811709369 +0000 UTC m=+24.351393165 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:00:13 compute-1 podman[139214]: 2026-01-31 07:00:13.925318254 +0000 UTC m=+0.046472570 container create 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:00:13 compute-1 podman[139214]: 2026-01-31 07:00:13.897835486 +0000 UTC m=+0.018989832 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:00:13 compute-1 python3[139052]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z 
quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:00:13 compute-1 sudo[139238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:00:14 compute-1 sudo[139238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:00:14 compute-1 sudo[139238]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:14 compute-1 sudo[139050]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:14 compute-1 sudo[139277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:00:14 compute-1 sudo[139277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:00:14 compute-1 sudo[139277]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:14 compute-1 sudo[139451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aojmuvpxyrdrfnqlyovnonxgxnxoerqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842814.2169445-1266-178710586753343/AnsiballZ_stat.py'
Jan 31 07:00:14 compute-1 sudo[139451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:14.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:14 compute-1 python3.9[139453]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:00:14 compute-1 sudo[139451]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:14 compute-1 ceph-mon[81728]: pgmap v483: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:14 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:00:14 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:14 compute-1 ceph-mon[81728]: pgmap v484: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:15 compute-1 sudo[139605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoxfpxunkloacjzklxlhwgwsedubnlfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842814.9287727-1293-26131172751048/AnsiballZ_file.py'
Jan 31 07:00:15 compute-1 sudo[139605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:15 compute-1 python3.9[139607]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:15 compute-1 sudo[139605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:15 compute-1 sudo[139681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqraxwqicjpikkoicrjblugcnrgvfhgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842814.9287727-1293-26131172751048/AnsiballZ_stat.py'
Jan 31 07:00:15 compute-1 sudo[139681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:15 compute-1 python3.9[139683]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:00:15 compute-1 sudo[139681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:16 compute-1 sudo[139832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffptvypxlhsodwhskahmzzukdhnzmaie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842815.7645822-1293-238333389535679/AnsiballZ_copy.py'
Jan 31 07:00:16 compute-1 sudo[139832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:16 compute-1 python3.9[139834]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769842815.7645822-1293-238333389535679/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:16 compute-1 sudo[139832]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:16.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:16 compute-1 sudo[139908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtrythzkiutdeohyhgxzevipneaixrmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842815.7645822-1293-238333389535679/AnsiballZ_systemd.py'
Jan 31 07:00:16 compute-1 sudo[139908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:00:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:16.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:00:16 compute-1 python3.9[139910]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:00:16 compute-1 systemd[1]: Reloading.
Jan 31 07:00:16 compute-1 systemd-rc-local-generator[139938]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:16 compute-1 systemd-sysv-generator[139941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:16 compute-1 ceph-mon[81728]: pgmap v485: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:17 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:17 compute-1 sudo[139908]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:17 compute-1 sudo[140020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbpjepqbudsulnazedlprnnmymiwapqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842815.7645822-1293-238333389535679/AnsiballZ_systemd.py'
Jan 31 07:00:17 compute-1 sudo[140020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:17 compute-1 python3.9[140022]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:00:17 compute-1 systemd[1]: Reloading.
Jan 31 07:00:17 compute-1 systemd-rc-local-generator[140047]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:17 compute-1 systemd-sysv-generator[140052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:17 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 07:00:17 compute-1 systemd[1]: Started libcrun container.
Jan 31 07:00:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135273c26f8f1b9703544685a2096129c8296d6e0c67aa3d23dc225440dee472/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 07:00:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135273c26f8f1b9703544685a2096129c8296d6e0c67aa3d23dc225440dee472/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:00:17 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06.
Jan 31 07:00:17 compute-1 podman[140063]: 2026-01-31 07:00:17.876074936 +0000 UTC m=+0.103142033 container init 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: + sudo -E kolla_set_configs
Jan 31 07:00:17 compute-1 podman[140063]: 2026-01-31 07:00:17.897605098 +0000 UTC m=+0.124672175 container start 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 07:00:17 compute-1 edpm-start-podman-container[140063]: ovn_metadata_agent
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Validating config file
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Copying service configuration files
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Writing out command to execute
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 07:00:17 compute-1 edpm-start-podman-container[140062]: Creating additional drop-in dependency for "ovn_metadata_agent" (671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06)
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: ++ cat /run_command
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: + CMD=neutron-ovn-metadata-agent
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: + ARGS=
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: + sudo kolla_copy_cacerts
Jan 31 07:00:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:17 compute-1 systemd[1]: Reloading.
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: + [[ ! -n '' ]]
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: + . kolla_extend_start
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: + umask 0022
Jan 31 07:00:17 compute-1 ovn_metadata_agent[140078]: + exec neutron-ovn-metadata-agent
Jan 31 07:00:17 compute-1 podman[140085]: 2026-01-31 07:00:17.979703572 +0000 UTC m=+0.074310717 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 07:00:18 compute-1 systemd-rc-local-generator[140152]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:18 compute-1 systemd-sysv-generator[140156]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:18 compute-1 systemd[1]: Started ovn_metadata_agent container.
Jan 31 07:00:18 compute-1 sudo[140020]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:18.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:18.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:19 compute-1 ceph-mon[81728]: pgmap v486: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:19 compute-1 python3.9[140318]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.836 140083 INFO neutron.common.config [-] Logging enabled!
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.836 140083 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.837 140083 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.837 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.837 140083 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.837 140083 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.838 140083 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.838 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.838 140083 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.838 140083 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.838 140083 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.838 140083 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.838 140083 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.839 140083 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.839 140083 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.839 140083 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.839 140083 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.839 140083 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.839 140083 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.839 140083 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.839 140083 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.840 140083 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.840 140083 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.840 140083 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.840 140083 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.840 140083 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.840 140083 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.840 140083 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.840 140083 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.841 140083 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.841 140083 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.841 140083 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.841 140083 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.841 140083 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.841 140083 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.841 140083 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.842 140083 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.842 140083 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.842 140083 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.842 140083 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.842 140083 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.842 140083 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.842 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.843 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.843 140083 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.843 140083 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.843 140083 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.843 140083 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.843 140083 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.843 140083 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.843 140083 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.843 140083 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.844 140083 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.844 140083 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.844 140083 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.844 140083 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.844 140083 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.844 140083 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.844 140083 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.844 140083 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.845 140083 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.845 140083 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.845 140083 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.845 140083 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.845 140083 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.845 140083 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.845 140083 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.846 140083 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.846 140083 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.846 140083 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.846 140083 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.846 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.846 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.846 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.846 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.847 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.847 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.847 140083 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.847 140083 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.847 140083 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.847 140083 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.847 140083 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.847 140083 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.848 140083 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.849 140083 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.850 140083 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.850 140083 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.850 140083 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.850 140083 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.850 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.850 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.850 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.850 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.850 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.851 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.851 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.851 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.851 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.851 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.851 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.851 140083 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.851 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.852 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.853 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.853 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.853 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.853 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.853 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.853 140083 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.854 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.854 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.854 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.854 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.854 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.854 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.854 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.854 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.855 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.855 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.855 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.855 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.855 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.855 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.855 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.856 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.857 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.857 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.857 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.857 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.857 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.857 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.857 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.857 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.857 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.858 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.858 140083 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.858 140083 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.858 140083 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.858 140083 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.858 140083 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.858 140083 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.859 140083 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.859 140083 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.859 140083 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.859 140083 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.859 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.859 140083 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.859 140083 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.859 140083 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.859 140083 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.860 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.861 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.862 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.862 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.862 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.862 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.862 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.862 140083 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.862 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.862 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.862 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.863 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.864 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.864 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.864 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.864 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.864 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.864 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.864 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.864 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.864 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.865 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.865 140083 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.865 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.865 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.865 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.865 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.865 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.865 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.865 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.866 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.866 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.866 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.866 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.866 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.866 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.866 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.866 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.866 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.867 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.867 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.867 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.867 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.867 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.867 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.867 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.867 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.867 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.868 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.869 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.869 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.869 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.869 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.869 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.869 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.869 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.869 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.869 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.870 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.870 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.870 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.870 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.870 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.870 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.870 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.870 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.870 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.871 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.871 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.871 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.871 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.871 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.871 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.872 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.872 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.872 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.872 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.872 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.872 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.872 140083 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.872 140083 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.933 140083 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.933 140083 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.933 140083 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.934 140083 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.934 140083 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.950 140083 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 3f1b6d5d-330e-4693-ab86-ea25a99a46d7 (UUID: 3f1b6d5d-330e-4693-ab86-ea25a99a46d7) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.975 140083 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.976 140083 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.976 140083 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.976 140083 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.979 140083 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.985 140083 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.991 140083 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '3f1b6d5d-330e-4693-ab86-ea25a99a46d7'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f61c836e880>], external_ids={}, name=3f1b6d5d-330e-4693-ab86-ea25a99a46d7, nb_cfg_timestamp=1769842738960, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.992 140083 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f61c835af70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.993 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.994 140083 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.994 140083 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.994 140083 INFO oslo_service.service [-] Starting 1 workers
Jan 31 07:00:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:19.997 140083 DEBUG oslo_service.service [-] Started child 140345 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.000 140345 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-426463'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.001 140083 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp5cinnibt/privsep.sock']
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.018 140345 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.018 140345 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.018 140345 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.022 140345 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.027 140345 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.033 140345 INFO eventlet.wsgi.server [-] (140345) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 31 07:00:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:20.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:20 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 07:00:20 compute-1 sudo[140476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztibabfnvcgfgickltwfirktbzzascqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842820.402198-1428-76762838122162/AnsiballZ_stat.py'
Jan 31 07:00:20 compute-1 sudo[140476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.730 140083 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.732 140083 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp5cinnibt/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.543 140442 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.547 140442 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.549 140442 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.549 140442 INFO oslo.privsep.daemon [-] privsep daemon running as pid 140442
Jan 31 07:00:20 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:20.735 140442 DEBUG oslo.privsep.daemon [-] privsep: reply[ce8f319d-04b0-4097-a9a6-429a8ea8374c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:00:21 compute-1 python3.9[140478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:00:21 compute-1 sudo[140476]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:21 compute-1 sudo[140606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmldiiraekbosdjrbtmgniafoixhfsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842820.402198-1428-76762838122162/AnsiballZ_copy.py'
Jan 31 07:00:21 compute-1 sudo[140606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:21 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:21.412 140442 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:00:21 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:21.412 140442 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:00:21 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:21.412 140442 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:00:21 compute-1 python3.9[140608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842820.402198-1428-76762838122162/.source.yaml _original_basename=.afotml__ follow=False checksum=0432c59ffadcc8d3b3a9efaa1eebb4528ff2936e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:21 compute-1 sudo[140606]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:21 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:21.986 140442 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b24907-a7ad-425e-b893-7584b0f49e47]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:00:21 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:21.989 140083 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=3f1b6d5d-330e-4693-ab86-ea25a99a46d7, column=external_ids, values=({'neutron:ovn-metadata-id': '5ef94d88-e7f2-5c6f-a6a5-3972022e7371'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.002 140083 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3f1b6d5d-330e-4693-ab86-ea25a99a46d7, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.008 140083 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.008 140083 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.008 140083 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.009 140083 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.009 140083 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.009 140083 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.009 140083 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.009 140083 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.009 140083 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.009 140083 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.009 140083 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.010 140083 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.010 140083 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.010 140083 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.010 140083 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.010 140083 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.010 140083 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.011 140083 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.011 140083 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.011 140083 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.011 140083 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.011 140083 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.011 140083 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.011 140083 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.011 140083 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.012 140083 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.012 140083 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.012 140083 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.012 140083 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.012 140083 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.012 140083 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.012 140083 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.013 140083 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.013 140083 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.013 140083 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.013 140083 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.013 140083 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.013 140083 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.014 140083 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.014 140083 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.014 140083 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.014 140083 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.014 140083 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.014 140083 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.014 140083 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.015 140083 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.015 140083 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.015 140083 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.015 140083 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.015 140083 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.015 140083 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.015 140083 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.015 140083 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.016 140083 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.016 140083 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.016 140083 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.016 140083 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.016 140083 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.016 140083 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.016 140083 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.016 140083 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.017 140083 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.017 140083 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.017 140083 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.017 140083 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.017 140083 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.017 140083 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.017 140083 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.017 140083 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.018 140083 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.018 140083 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.018 140083 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.018 140083 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.018 140083 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.018 140083 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.018 140083 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.018 140083 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.019 140083 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.019 140083 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.019 140083 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.019 140083 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.019 140083 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.019 140083 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.019 140083 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.019 140083 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.019 140083 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.020 140083 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.021 140083 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.021 140083 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.021 140083 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.021 140083 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.021 140083 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.021 140083 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.021 140083 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.021 140083 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.021 140083 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.022 140083 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.022 140083 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.022 140083 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.022 140083 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.022 140083 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.022 140083 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.023 140083 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.023 140083 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.023 140083 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.023 140083 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.023 140083 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.023 140083 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.023 140083 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.023 140083 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.024 140083 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.024 140083 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.024 140083 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.024 140083 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.024 140083 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.024 140083 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.025 140083 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.025 140083 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.025 140083 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.025 140083 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.025 140083 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.025 140083 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.025 140083 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.026 140083 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.026 140083 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.026 140083 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.026 140083 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.026 140083 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.026 140083 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.027 140083 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.027 140083 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.027 140083 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.027 140083 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.027 140083 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.027 140083 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.027 140083 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.027 140083 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.027 140083 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.028 140083 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.028 140083 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.028 140083 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.028 140083 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.028 140083 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.028 140083 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.028 140083 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.029 140083 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.029 140083 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.029 140083 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.029 140083 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.029 140083 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.029 140083 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.029 140083 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.029 140083 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.029 140083 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.030 140083 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.030 140083 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.030 140083 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.030 140083 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.030 140083 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.030 140083 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.030 140083 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.030 140083 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.030 140083 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.031 140083 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.031 140083 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.031 140083 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.031 140083 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.031 140083 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.031 140083 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.031 140083 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.031 140083 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.032 140083 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.033 140083 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.033 140083 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.033 140083 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.033 140083 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.033 140083 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.033 140083 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.033 140083 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.033 140083 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.033 140083 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.034 140083 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.035 140083 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.036 140083 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.037 140083 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.038 140083 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.038 140083 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.038 140083 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.038 140083 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.038 140083 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.038 140083 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.038 140083 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.038 140083 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.038 140083 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.039 140083 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.040 140083 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.041 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.041 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.041 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.041 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.041 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.041 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.041 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.041 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.041 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.042 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.043 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.043 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.043 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.043 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.043 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.043 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.043 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.043 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.044 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.044 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.044 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.044 140083 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.044 140083 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.044 140083 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.044 140083 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.045 140083 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:00:22 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:00:22.045 140083 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:00:22 compute-1 sshd-session[131553]: Connection closed by 192.168.122.30 port 41218
Jan 31 07:00:22 compute-1 sshd-session[131550]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:00:22 compute-1 systemd[1]: session-47.scope: Deactivated successfully.
Jan 31 07:00:22 compute-1 systemd[1]: session-47.scope: Consumed 46.009s CPU time.
Jan 31 07:00:22 compute-1 systemd-logind[788]: Session 47 logged out. Waiting for processes to exit.
Jan 31 07:00:22 compute-1 systemd-logind[788]: Removed session 47.
Jan 31 07:00:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:22.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:22 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:22.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:22 compute-1 ceph-mon[81728]: pgmap v487: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:23 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:23 compute-1 ceph-mon[81728]: pgmap v488: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:00:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:24.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:00:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:24.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:25 compute-1 ceph-mon[81728]: pgmap v489: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:26.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:26.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:26 compute-1 ceph-mon[81728]: pgmap v490: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:27 compute-1 podman[140636]: 2026-01-31 07:00:27.17591278 +0000 UTC m=+0.092534116 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:00:27 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:27 compute-1 sshd-session[140662]: Accepted publickey for zuul from 192.168.122.30 port 47248 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 07:00:27 compute-1 systemd-logind[788]: New session 48 of user zuul.
Jan 31 07:00:27 compute-1 systemd[1]: Started Session 48 of User zuul.
Jan 31 07:00:27 compute-1 sshd-session[140662]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:00:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:28.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:28.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:28 compute-1 python3.9[140815]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:00:29 compute-1 sudo[140969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnlrvnzohrvvapisucdqzqvtyepuadnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842829.0976458-62-117496667339979/AnsiballZ_command.py'
Jan 31 07:00:29 compute-1 sudo[140969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:29 compute-1 python3.9[140971]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:29 compute-1 sudo[140969]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:30 compute-1 ceph-mon[81728]: pgmap v491: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:00:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:30.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:00:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:30.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:31 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:31 compute-1 ceph-mon[81728]: pgmap v492: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:31 compute-1 sudo[141134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjeltnlpnpzvqojrnvupmvyuaewdopak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842830.7522125-95-237581308652791/AnsiballZ_systemd_service.py'
Jan 31 07:00:31 compute-1 sudo[141134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:31 compute-1 python3.9[141136]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:00:31 compute-1 systemd[1]: Reloading.
Jan 31 07:00:31 compute-1 systemd-rc-local-generator[141157]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:31 compute-1 systemd-sysv-generator[141164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:31 compute-1 sudo[141134]: pam_unix(sudo:session): session closed for user root
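[annotation] systemd_service with daemon_reload=True and no unit name maps to a plain daemon-reload. The "Reloading." line and the two generator notices that follow it recur on every reload because systemd re-runs all unit generators each time:

    # Re-run unit generators and reload systemd's unit database.
    systemctl daemon-reload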
Jan 31 07:00:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:32.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:32 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:32 compute-1 python3.9[141320]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:00:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:32.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:32 compute-1 network[141337]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:00:32 compute-1 network[141338]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:00:32 compute-1 network[141339]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:00:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:33 compute-1 ceph-mon[81728]: pgmap v493: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:00:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:34.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:00:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:34.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:35 compute-1 sudo[141599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhtrrnflmuzdopjxlsqnzkucbzcufvve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842835.2613494-152-41246674067200/AnsiballZ_systemd_service.py'
Jan 31 07:00:35 compute-1 sudo[141599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:35 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:35 compute-1 ceph-mon[81728]: pgmap v494: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:35 compute-1 python3.9[141601]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:00:35 compute-1 sudo[141599]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:36 compute-1 sudo[141752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewzwepoympmwlqewqckifjtxjrqitwol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842835.9463913-152-156699199507260/AnsiballZ_systemd_service.py'
Jan 31 07:00:36 compute-1 sudo[141752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:36.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:36 compute-1 python3.9[141754]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:00:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:36.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:36 compute-1 sudo[141752]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:37 compute-1 sudo[141905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfqqkeqqcfhyjmpxnjqqoqmtjiuavwho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842836.6677907-152-194781471958203/AnsiballZ_systemd_service.py'
Jan 31 07:00:37 compute-1 sudo[141905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:37 compute-1 python3.9[141907]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:00:37 compute-1 sudo[141905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:37 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:37 compute-1 sudo[142058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nasgmieuxcaoynambvfpxqtkhopobwot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842837.472277-152-198042544739745/AnsiballZ_systemd_service.py'
Jan 31 07:00:37 compute-1 sudo[142058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:37 compute-1 ceph-mon[81728]: pgmap v495: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:38 compute-1 python3.9[142060]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:00:38 compute-1 sudo[142058]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:38 compute-1 sudo[142211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iecgiazqvhilcrnfxjythvhfytvltmvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842838.1649656-152-98950348956936/AnsiballZ_systemd_service.py'
Jan 31 07:00:38 compute-1 sudo[142211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:38.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:38.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:38 compute-1 python3.9[142213]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:00:38 compute-1 sudo[142211]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:38 compute-1 ceph-mon[81728]: pgmap v496: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:39 compute-1 sudo[142364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxgzkfnorrlrqrjkkflexrgotndmczpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842838.829797-152-244063535339137/AnsiballZ_systemd_service.py'
Jan 31 07:00:39 compute-1 sudo[142364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:39 compute-1 python3.9[142366]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:00:39 compute-1 sudo[142364]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:39 compute-1 sudo[142517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uybhomcpixcvbbcjzdgpssygkmsrzuhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842839.4972565-152-118703807658059/AnsiballZ_systemd_service.py'
Jan 31 07:00:39 compute-1 sudo[142517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:39 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:40 compute-1 python3.9[142519]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:00:40 compute-1 sudo[142517]: pam_unix(sudo:session): session closed for user root
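[annotation] Between 07:00:35 and 07:00:40 the run disables and stops the whole TripleO libvirt stack, one systemd_service call per unit. enabled=False plus state=stopped is equivalent to disable --now, so the series collapses to one loop (unit names copied from the tasks above):

    # Stop and disable the TripleO nova/libvirt units in one pass.
    for u in tripleo_nova_libvirt.target \
             tripleo_nova_virtlogd_wrapper.service \
             tripleo_nova_virtnodedevd.service \
             tripleo_nova_virtproxyd.service \
             tripleo_nova_virtqemud.service \
             tripleo_nova_virtsecretd.service \
             tripleo_nova_virtstoraged.service; do
        systemctl disable --now "$u"
    done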
Jan 31 07:00:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:40.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:00:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:40.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:00:40 compute-1 sudo[142670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmubxipuaiedpttkxsldrbvqbdlonpox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842840.509694-308-270175318057825/AnsiballZ_file.py'
Jan 31 07:00:40 compute-1 sudo[142670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:40 compute-1 ceph-mon[81728]: pgmap v497: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:41 compute-1 python3.9[142672]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:41 compute-1 sudo[142670]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:41 compute-1 sudo[142822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zetptzshndxgbdstpcthymrdexprajtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842841.1934187-308-122345726817872/AnsiballZ_file.py'
Jan 31 07:00:41 compute-1 sudo[142822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:41 compute-1 python3.9[142824]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:41 compute-1 sudo[142822]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:41 compute-1 sudo[142974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlqccitorrojvyldxytmotgegketkmvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842841.7302024-308-247495377210207/AnsiballZ_file.py'
Jan 31 07:00:41 compute-1 sudo[142974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:42 compute-1 python3.9[142976]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:42 compute-1 sudo[142974]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:42.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:42 compute-1 sudo[143126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgfnwegdddbevmzpidbmvlazzydxtlfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842842.2781332-308-199441578124035/AnsiballZ_file.py'
Jan 31 07:00:42 compute-1 sudo[143126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:42 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:42.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:42 compute-1 python3.9[143128]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:42 compute-1 sudo[143126]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:42 compute-1 sudo[143278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxcfeahjjwdbygyarguyfzeoloinkwtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842842.7896888-308-77130942007286/AnsiballZ_file.py'
Jan 31 07:00:42 compute-1 sudo[143278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:43 compute-1 python3.9[143280]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:43 compute-1 sudo[143278]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:43 compute-1 ceph-mon[81728]: pgmap v498: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:43 compute-1 sudo[143430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yebggaquuijejqrunfowviueignisiwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842843.2938983-308-225667941209670/AnsiballZ_file.py'
Jan 31 07:00:43 compute-1 sudo[143430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:43 compute-1 python3.9[143432]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:43 compute-1 sudo[143430]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:44 compute-1 sudo[143582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orixvhqomzjbcqzisqwhboqbuujdript ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842843.8211813-308-226463418251234/AnsiballZ_file.py'
Jan 31 07:00:44 compute-1 sudo[143582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:44 compute-1 python3.9[143584]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:44 compute-1 sudo[143582]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:44.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:44.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:44 compute-1 sudo[143734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsaeppugigpsgpaeluirtltxouitgmaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842844.6304843-458-181903367977724/AnsiballZ_file.py'
Jan 31 07:00:44 compute-1 sudo[143734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:45 compute-1 python3.9[143736]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:45 compute-1 sudo[143734]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:45 compute-1 sudo[143886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hojunlprsffdnjvblqmeexljooqzqumg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842845.386411-458-201401941796527/AnsiballZ_file.py'
Jan 31 07:00:45 compute-1 sudo[143886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:45 compute-1 python3.9[143888]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:46 compute-1 sudo[143886]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:46 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:46 compute-1 sudo[144038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztjdtcysjiycjagevvnifmrvztkofbvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842846.1053927-458-243255027203959/AnsiballZ_file.py'
Jan 31 07:00:46 compute-1 sudo[144038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:46.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:46 compute-1 python3.9[144040]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:46 compute-1 sudo[144038]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:46.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:46 compute-1 sudo[144190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwykckqtvvvucqlphzodmmhlwdvnhabu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842846.623587-458-199056970859008/AnsiballZ_file.py'
Jan 31 07:00:46 compute-1 sudo[144190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:47 compute-1 python3.9[144192]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:47 compute-1 sudo[144190]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:47 compute-1 ceph-mon[81728]: pgmap v499: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:47 compute-1 sudo[144342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-touxmkyoemxppmwpcttdbfkaazujzmep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842847.1313481-458-76327659153419/AnsiballZ_file.py'
Jan 31 07:00:47 compute-1 sudo[144342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:47 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:47 compute-1 python3.9[144344]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:47 compute-1 sudo[144342]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:47 compute-1 sudo[144494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psartwkriczllfjjhvbhtasafvgshmki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842847.6575446-458-148796121871721/AnsiballZ_file.py'
Jan 31 07:00:47 compute-1 sudo[144494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:48 compute-1 python3.9[144496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:48 compute-1 sudo[144494]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:48 compute-1 sudo[144655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuysjrzxsepwsueraksoyecrashkvfrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842848.1822689-458-103223927869704/AnsiballZ_file.py'
Jan 31 07:00:48 compute-1 sudo[144655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:48 compute-1 ceph-mon[81728]: pgmap v500: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:48 compute-1 podman[144620]: 2026-01-31 07:00:48.459901557 +0000 UTC m=+0.079658757 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
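[annotation] The health_status entries interleaved with the cleanup come from podman's periodic healthchecks against the EDPM containers (here ovn_metadata_agent, shortly after ovn_controller); per the config_data above, the mounted /openstack/healthcheck script is what gets executed. To run the same check by hand:

    # Execute the container's configured healthcheck once;
    # exit status 0 means healthy.
    podman healthcheck run ovn_metadata_agent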
Jan 31 07:00:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:48.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:48.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:48 compute-1 python3.9[144661]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:48 compute-1 sudo[144655]: pam_unix(sudo:session): session closed for user root
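[annotation] With the units stopped, the run deletes their unit files, first from /usr/lib/systemd/system (07:00:41 to 07:00:44) and then from /etc/systemd/system (07:00:45 to 07:00:48). For plain files, ansible.builtin.file with state=absent behaves like rm -f, so both passes amount to:

    # Remove TripleO unit files from both the vendor and admin unit dirs.
    for d in /usr/lib/systemd/system /etc/systemd/system; do
        for u in tripleo_nova_libvirt.target \
                 tripleo_nova_virtlogd_wrapper.service \
                 tripleo_nova_virtnodedevd.service \
                 tripleo_nova_virtproxyd.service \
                 tripleo_nova_virtqemud.service \
                 tripleo_nova_virtsecretd.service \
                 tripleo_nova_virtstoraged.service; do
            rm -f "$d/$u"
        done
    done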
Jan 31 07:00:49 compute-1 sudo[144817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pryrkbslkhdvzpvzubjbgamektqkzzra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842848.8568242-611-91396377837866/AnsiballZ_command.py'
Jan 31 07:00:49 compute-1 sudo[144817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:49 compute-1 python3.9[144819]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:49 compute-1 sudo[144817]: pam_unix(sudo:session): session closed for user root
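[annotation] The certmonger step is the one shell task in this stretch; journald captured its script across several lines. Reassembled, it stops and disables certmonger if active, then masks it unless a local unit override exists:

    # As captured in the task above: disable certmonger, and mask it
    # only when no /etc/systemd/system override unit is present.
    if systemctl is-active certmonger.service; then
        systemctl disable --now certmonger.service
        test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi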
Jan 31 07:00:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:50 compute-1 python3.9[144971]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
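[annotation] The follow-up find call inventories any tracked certificate requests before deciding what else needs cleaning up. A close shell analogue of the module's parameters (hidden entries included, any file type, no recursion):

    # List entries, including dotfiles, directly under the certmonger
    # requests directory; -mindepth/-maxdepth 1 mirrors recurse=False.
    find /var/lib/certmonger/requests -mindepth 1 -maxdepth 1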
Jan 31 07:00:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:50.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:00:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:50.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:00:50 compute-1 sudo[145121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yygeaejrpczazpkrpfoucutsqqheuacc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842850.430824-665-109927622180429/AnsiballZ_systemd_service.py'
Jan 31 07:00:50 compute-1 sudo[145121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:50 compute-1 ceph-mon[81728]: pgmap v501: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:50 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 588 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:50 compute-1 python3.9[145123]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:00:51 compute-1 systemd[1]: Reloading.
Jan 31 07:00:51 compute-1 systemd-rc-local-generator[145147]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:51 compute-1 systemd-sysv-generator[145152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:51 compute-1 sudo[145121]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:51 compute-1 sudo[145307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clmcfngffykurkcqrlzrgregxwrljgts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842851.4187229-689-226765101746307/AnsiballZ_command.py'
Jan 31 07:00:51 compute-1 sudo[145307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:51 compute-1 python3.9[145309]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:51 compute-1 sudo[145307]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:51 compute-1 ceph-mon[81728]: pgmap v502: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:52 compute-1 sudo[145460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndvzpgixmujmxclxrcwvahppmfxvtlip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842851.9846344-689-39215931304376/AnsiballZ_command.py'
Jan 31 07:00:52 compute-1 sudo[145460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:52 compute-1 python3.9[145462]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:52 compute-1 sudo[145460]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:52.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:52 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:00:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:52.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:00:52 compute-1 sudo[145613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byohxfbaivtawoiwxmhyiqxsyomtddpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842852.511911-689-180912281232379/AnsiballZ_command.py'
Jan 31 07:00:52 compute-1 sudo[145613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:52 compute-1 python3.9[145615]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:52 compute-1 sudo[145613]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:53 compute-1 sudo[145766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajevlbkjrivsiiabjmtmryqntzgyetvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842853.0077229-689-268413429365994/AnsiballZ_command.py'
Jan 31 07:00:53 compute-1 sudo[145766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:53 compute-1 python3.9[145768]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:53 compute-1 sudo[145766]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:53 compute-1 sudo[145919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snyapbabtzlhlrxzbdskvptepejztuxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842853.5700643-689-41667220074652/AnsiballZ_command.py'
Jan 31 07:00:53 compute-1 sudo[145919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:53 compute-1 python3.9[145921]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:53 compute-1 sudo[145919]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:54 compute-1 ceph-mon[81728]: pgmap v503: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:54 compute-1 sudo[146072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgahubximvgovpbngqfzgdtemvnxlvvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842854.1243591-689-133173052632292/AnsiballZ_command.py'
Jan 31 07:00:54 compute-1 sudo[146072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:54.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:54 compute-1 python3.9[146074]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:54.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:54 compute-1 sudo[146072]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:54 compute-1 sudo[146225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsrhntjevlyciuhobnlqcscrskgjseit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842854.7041569-689-229817140156119/AnsiballZ_command.py'
Jan 31 07:00:54 compute-1 sudo[146225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:55 compute-1 python3.9[146227]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:55 compute-1 sudo[146225]: pam_unix(sudo:session): session closed for user root
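[annotation] After a second daemon-reload, the run clears any failed state left behind by the removed units, one systemctl reset-failed per unit (07:00:51 to 07:00:55). reset-failed is what lets a unit name disappear cleanly from systemctl --failed once its file is gone; as a loop:

    # Drop residual failed state for each removed TripleO unit.
    for u in tripleo_nova_libvirt.target \
             tripleo_nova_virtlogd_wrapper.service \
             tripleo_nova_virtnodedevd.service \
             tripleo_nova_virtproxyd.service \
             tripleo_nova_virtqemud.service \
             tripleo_nova_virtsecretd.service \
             tripleo_nova_virtstoraged.service; do
        /usr/bin/systemctl reset-failed "$u"
    done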
Jan 31 07:00:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:55 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:00:56 compute-1 sudo[146378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmdthvosgdwuzngfrpkvxhosvrpqzwof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842855.9365292-851-84062068724016/AnsiballZ_getent.py'
Jan 31 07:00:56 compute-1 sudo[146378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:56 compute-1 ceph-mon[81728]: pgmap v504: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:56.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:56.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:56 compute-1 python3.9[146380]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 07:00:56 compute-1 sudo[146378]: pam_unix(sudo:session): session closed for user root
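[annotation] The run now pivots from teardown to provisioning: getent checks whether a libvirt account already exists before one is created. Equivalent lookup (fail_key=True treats a missing key as an error, which getent signals with a non-zero exit status):

    # Look up the libvirt user in the passwd database;
    # prints the passwd entry if present, exits non-zero if not.
    getent passwd libvirt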
Jan 31 07:00:57 compute-1 sudo[146540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtedonuiqjhwtymqaxwvrisgpbolapfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842856.83457-875-131258101177909/AnsiballZ_group.py'
Jan 31 07:00:57 compute-1 sudo[146540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:57 compute-1 podman[146505]: 2026-01-31 07:00:57.310200681 +0000 UTC m=+0.101763665 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:00:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:57 compute-1 python3.9[146548]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:00:57 compute-1 groupadd[146560]: group added to /etc/group: name=libvirt, GID=42473
Jan 31 07:00:57 compute-1 groupadd[146560]: group added to /etc/gshadow: name=libvirt
Jan 31 07:00:57 compute-1 groupadd[146560]: new group: name=libvirt, GID=42473
Jan 31 07:00:57 compute-1 sudo[146540]: pam_unix(sudo:session): session closed for user root
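[annotation] The group module call materializes as a plain groupadd, as the three groupadd audit lines confirm; GID 42473 is the fixed value passed in the task:

    # Create the libvirt group with the fixed GID from the play.
    groupadd -g 42473 libvirt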
Jan 31 07:00:57 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:00:58 compute-1 sudo[146715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvzuvewvwdyyimyzrfgqdbnmxdobola ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842857.704944-899-208521536941238/AnsiballZ_user.py'
Jan 31 07:00:58 compute-1 sudo[146715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:58 compute-1 ceph-mon[81728]: pgmap v505: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:00:58 compute-1 python3.9[146717]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:00:58 compute-1 useradd[146719]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 07:00:58 compute-1 sudo[146715]: pam_unix(sudo:session): session closed for user root
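[annotation] The matching service account is created with the same fixed UID, the libvirt primary group, a home directory, and no login shell; the useradd audit line above reflects exactly these fields. A shell equivalent of the task:

    # Create the libvirt service account (no interactive login).
    useradd -u 42473 -g libvirt -c 'libvirt user' -s /sbin/nologin -m libvirt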
Jan 31 07:00:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:00:58.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:00:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:00:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:00:58.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:00:59 compute-1 sudo[146875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdfgywlurfbwqctrspuxrsobwcthfoni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842858.9500608-932-73804110394793/AnsiballZ_setup.py'
Jan 31 07:00:59 compute-1 sudo[146875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:59 compute-1 python3.9[146877]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:00:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:00:59 compute-1 sudo[146875]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:00 compute-1 sudo[146959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xghkpzimlhxhwogicwqaerathofalyrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842858.9500608-932-73804110394793/AnsiballZ_dnf.py'
Jan 31 07:01:00 compute-1 sudo[146959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:00.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:00.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:00 compute-1 python3.9[146961]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:01:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:00 compute-1 ceph-mon[81728]: pgmap v506: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:00 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 599 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:01 compute-1 CROND[146964]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 07:01:01 compute-1 run-parts[146967]: (/etc/cron.hourly) starting 0anacron
Jan 31 07:01:01 compute-1 run-parts[146973]: (/etc/cron.hourly) finished 0anacron
Jan 31 07:01:01 compute-1 CROND[146963]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 07:01:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:01 compute-1 ceph-mon[81728]: pgmap v507: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:02 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:02.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:02.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:04.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:04 compute-1 ceph-mon[81728]: pgmap v508: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:04 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 603 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:05 compute-1 ceph-mon[81728]: pgmap v509: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:06.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:06.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:07 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:07 compute-1 ceph-mon[81728]: pgmap v510: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:08.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:08.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:09 compute-1 ceph-mon[81728]: pgmap v511: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:09 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 608 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:10.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:10.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:12 compute-1 ceph-mon[81728]: pgmap v512: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 07:01:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:12 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:12.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:12.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:14 compute-1 ceph-mon[81728]: pgmap v513: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 07:01:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:14 compute-1 sudo[147155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:01:14 compute-1 sudo[147155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:01:14 compute-1 sudo[147155]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:14 compute-1 sudo[147180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:01:14 compute-1 sudo[147180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:01:14 compute-1 sudo[147180]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:14 compute-1 sudo[147205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:01:14 compute-1 sudo[147205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:01:14 compute-1 sudo[147205]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:14 compute-1 sudo[147230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:01:14 compute-1 sudo[147230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:01:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:14.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:14 compute-1 sudo[147230]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:15 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:01:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:01:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:01:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:01:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:01:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:01:15 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:01:16 compute-1 ceph-mon[81728]: pgmap v514: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 31 07:01:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:16.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:17 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:18 compute-1 ceph-mon[81728]: pgmap v515: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 31 07:01:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:18.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:18.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:19 compute-1 podman[147292]: 2026-01-31 07:01:19.205782644 +0000 UTC m=+0.104253794 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:01:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:01:19.875 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:01:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:01:19.876 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:01:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:01:19.876 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:01:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:20 compute-1 ceph-mon[81728]: pgmap v516: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 31 07:01:20 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:20.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:20.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:20 compute-1 sudo[147311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:01:20 compute-1 sudo[147311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:01:20 compute-1 sudo[147311]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:20 compute-1 sudo[147336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:01:20 compute-1 sudo[147336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:01:20 compute-1 sudo[147336]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:01:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:01:22 compute-1 ceph-mon[81728]: pgmap v517: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Jan 31 07:01:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:22 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:22.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:22.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:24 compute-1 ceph-mon[81728]: pgmap v518: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 31 07:01:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:24.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:24.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:25 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 624 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:26.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:26 compute-1 ceph-mon[81728]: pgmap v519: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 31 07:01:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:26.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:27 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:27 compute-1 ceph-mon[81728]: pgmap v520: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:28 compute-1 podman[147362]: 2026-01-31 07:01:28.167250394 +0000 UTC m=+0.089689457 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 07:01:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:28.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:28.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:28 compute-1 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 07:01:28 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:01:28 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:01:28 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:01:28 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:01:28 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:01:28 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:01:28 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:01:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:29 compute-1 ceph-mon[81728]: pgmap v521: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:29 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:30.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:30.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:31 compute-1 ceph-mon[81728]: pgmap v522: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:32 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:32.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:32.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:33 compute-1 ceph-mon[81728]: pgmap v523: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:34.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:34.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:34 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:36 compute-1 ceph-mon[81728]: pgmap v524: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:36.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:36.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:37 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:38 compute-1 ceph-mon[81728]: pgmap v525: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:38.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:38.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:40 compute-1 ceph-mon[81728]: pgmap v526: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:40 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:40.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:40.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:41 compute-1 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 07:01:41 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:01:41 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:01:41 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:01:41 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:01:41 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:01:41 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:01:41 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:01:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:42 compute-1 ceph-mon[81728]: pgmap v527: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:42 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:42.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:42.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:44 compute-1 ceph-mon[81728]: pgmap v528: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:01:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:44.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:01:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:44.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:45 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:46.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:46 compute-1 ceph-mon[81728]: pgmap v529: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:46.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:47 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:47 compute-1 ceph-mon[81728]: pgmap v530: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:48.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:48.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:49 compute-1 ceph-mon[81728]: pgmap v531: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:49 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:50 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 07:01:50 compute-1 podman[147402]: 2026-01-31 07:01:50.129582804 +0000 UTC m=+0.047914760 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 07:01:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:50.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:50.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:51 compute-1 ceph-mon[81728]: pgmap v532: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:52 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:01:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:52.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:01:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:54 compute-1 ceph-mon[81728]: pgmap v533: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:54.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:54.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:55 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:01:56 compute-1 ceph-mon[81728]: pgmap v534: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:56.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:56.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:57 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:01:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:01:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:01:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:01:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:01:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:01:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:01:58.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:01:58 compute-1 ceph-mon[81728]: pgmap v535: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:59 compute-1 podman[153254]: 2026-01-31 07:01:59.15679725 +0000 UTC m=+0.071576160 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:01:59 compute-1 ceph-mon[81728]: pgmap v536: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:01:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:01:59 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:00.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
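
The beast lines settle into a fixed rhythm: every two seconds, 192.168.122.100 and 192.168.122.102 each send an anonymous `HEAD / HTTP/1.0` that returns 200 with zero body bytes in under a millisecond. That is the signature of load-balancer health probes, not user traffic. A sketch for filtering them out when mining this access log (the regex is written against the beast line format shown here; treat it as an assumption for other radosgw versions):

    import re

    # Matches lines such as:
    # beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous
    #   [31/Jan/2026:07:01:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<verb>\S+) (?P<path>\S+) [^"]*" (?P<status>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s'
    )

    def is_health_probe(line: str) -> bool:
        m = BEAST.search(line)
        return bool(m and m["user"] == "anonymous"
                    and m["verb"] == "HEAD" and m["path"] == "/")
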
Jan 31 07:02:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:01 compute-1 ceph-mon[81728]: pgmap v537: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:02 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:02.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:02.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:03 compute-1 ceph-mon[81728]: pgmap v538: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:04.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:04 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:05 compute-1 sshd-session[159831]: Invalid user node from 2.57.122.238 port 33948
Jan 31 07:02:06 compute-1 sshd-session[159831]: Connection closed by invalid user node 2.57.122.238 port 33948 [preauth]
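
The pair of sshd-session lines above is a routine Internet-side brute-force probe: an unknown user name ("node") from 2.57.122.238 that disconnected before authenticating. On a host with port 22 exposed these arrive around the clock, and a per-source tally quickly shows whether one scanner dominates. A sketch over a saved journal export, assuming the same "Invalid user NAME from IP port N" wording sshd uses here:

    import re
    from collections import Counter

    INVALID = re.compile(r"Invalid user (?P<user>\S+) from (?P<ip>\S+) port \d+")

    def top_probers(log_path: str, n: int = 10):
        hits = Counter()
        with open(log_path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = INVALID.search(line)
                if m:
                    hits[m["ip"]] += 1
        return hits.most_common(n)
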
Jan 31 07:02:06 compute-1 ceph-mon[81728]: pgmap v539: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:06.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:06.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:07 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:08 compute-1 ceph-mon[81728]: pgmap v540: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:08.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:08.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:09 compute-1 ceph-mon[81728]: pgmap v541: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:09 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:10.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:12 compute-1 ceph-mon[81728]: pgmap v542: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:12.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:12.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:14 compute-1 ceph-mon[81728]: pgmap v543: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:14.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:14.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:15 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:16 compute-1 ceph-mon[81728]: pgmap v544: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:16.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:16.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:18 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:18 compute-1 ceph-mon[81728]: pgmap v545: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:18.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:02:19.877 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:02:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:02:19.878 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:02:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:02:19.878 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
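
The three lockutils DEBUG lines trace one full life cycle of an in-process lock: acquisition requested, acquired after waiting 0.001 s, released after being held for under a millisecond. Neutron's ProcessMonitor gets this instrumentation from oslo.concurrency, whose synchronized decorator wraps a function in a named lock and emits these acquire/wait/held messages at debug level. A minimal sketch of the pattern (the lock name is copied from the log; see the oslo.concurrency docs for the decorator's full signature):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named lock held; with debug logging enabled,
        # oslo.concurrency logs the "Acquiring lock" / "acquired" /
        # "released" lines seen above around this call.
        pass
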
Jan 31 07:02:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:20.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:20 compute-1 ceph-mon[81728]: pgmap v546: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:20 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:20 compute-1 sudo[164321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:02:20 compute-1 sudo[164321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:02:20 compute-1 sudo[164321]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:20 compute-1 sudo[164352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:02:20 compute-1 sudo[164352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:02:20 compute-1 sudo[164352]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:20 compute-1 podman[164345]: 2026-01-31 07:02:20.949481389 +0000 UTC m=+0.044770642 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
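
The podman health_status events for ovn_controller and ovn_metadata_agent come from the healthcheck declared in each container's config_data ('test': '/openstack/healthcheck', run against the mounted healthchecks directory); podman records the status and the failing streak shown in the event. The same check can be fired on demand with `podman healthcheck run`, which exits 0 when the configured test passes. A small wrapper sketch (container name taken from the log):

    import subprocess

    def container_healthy(name: str) -> bool:
        """Run the container's declared healthcheck once; exit code 0 means healthy."""
        return subprocess.run(
            ["podman", "healthcheck", "run", name]
        ).returncode == 0

    if __name__ == "__main__":
        print("ovn_metadata_agent:", container_healthy("ovn_metadata_agent"))
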
Jan 31 07:02:20 compute-1 sudo[164390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:02:20 compute-1 sudo[164390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:02:20 compute-1 sudo[164390]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:21 compute-1 sudo[164415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:02:21 compute-1 sudo[164415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:02:21 compute-1 sudo[164415]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:21 compute-1 ceph-mon[81728]: pgmap v547: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:02:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:02:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:02:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:02:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:02:21 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:02:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:22.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:22.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:23 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:23 compute-1 ceph-mon[81728]: pgmap v548: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:02:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:24.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:02:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:24.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:25 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:26.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:26 compute-1 ceph-mon[81728]: pgmap v549: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:26.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:27 compute-1 sudo[164476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:02:27 compute-1 sudo[164476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:02:27 compute-1 sudo[164476]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:27 compute-1 sudo[164501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:02:27 compute-1 sudo[164501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:02:27 compute-1 sudo[164501]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:28 compute-1 ceph-mon[81728]: pgmap v550: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:02:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:02:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:28 compute-1 kernel: SELinux:  Converting 2778 SID table entries...
Jan 31 07:02:28 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:02:28 compute-1 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:02:28 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:02:28 compute-1 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:02:28 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:02:28 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:02:28 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:02:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:28.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:29 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 07:02:29 compute-1 groupadd[164535]: group added to /etc/group: name=dnsmasq, GID=993
Jan 31 07:02:29 compute-1 groupadd[164535]: group added to /etc/gshadow: name=dnsmasq
Jan 31 07:02:29 compute-1 groupadd[164535]: new group: name=dnsmasq, GID=993
Jan 31 07:02:29 compute-1 useradd[164555]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 31 07:02:29 compute-1 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 31 07:02:29 compute-1 podman[164534]: 2026-01-31 07:02:29.358139955 +0000 UTC m=+0.072254490 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 07:02:29 compute-1 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Jan 31 07:02:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:30 compute-1 groupadd[164581]: group added to /etc/group: name=clevis, GID=992
Jan 31 07:02:30 compute-1 groupadd[164581]: group added to /etc/gshadow: name=clevis
Jan 31 07:02:30 compute-1 groupadd[164581]: new group: name=clevis, GID=992
Jan 31 07:02:30 compute-1 useradd[164588]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 31 07:02:30 compute-1 usermod[164598]: add 'clevis' to group 'tss'
Jan 31 07:02:30 compute-1 usermod[164598]: add 'clevis' to shadow group 'tss'
Jan 31 07:02:30 compute-1 ceph-mon[81728]: pgmap v551: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:30 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:30.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:32 compute-1 polkitd[43509]: Reloading rules
Jan 31 07:02:32 compute-1 polkitd[43509]: Collecting garbage unconditionally...
Jan 31 07:02:32 compute-1 polkitd[43509]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 07:02:32 compute-1 polkitd[43509]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 07:02:32 compute-1 polkitd[43509]: Finished loading, compiling and executing 3 rules
Jan 31 07:02:32 compute-1 polkitd[43509]: Reloading rules
Jan 31 07:02:32 compute-1 polkitd[43509]: Collecting garbage unconditionally...
Jan 31 07:02:32 compute-1 polkitd[43509]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 07:02:32 compute-1 polkitd[43509]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 07:02:32 compute-1 polkitd[43509]: Finished loading, compiling and executing 3 rules
Jan 31 07:02:32 compute-1 ceph-mon[81728]: pgmap v552: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:32.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:32.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:33 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:33 compute-1 groupadd[164788]: group added to /etc/group: name=ceph, GID=167
Jan 31 07:02:33 compute-1 groupadd[164788]: group added to /etc/gshadow: name=ceph
Jan 31 07:02:33 compute-1 groupadd[164788]: new group: name=ceph, GID=167
Jan 31 07:02:33 compute-1 useradd[164794]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 31 07:02:33 compute-1 ceph-mon[81728]: pgmap v553: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:02:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:02:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:34 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 693 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:35 compute-1 ceph-mon[81728]: pgmap v554: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:35 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 07:02:35 compute-1 sshd[1005]: Received signal 15; terminating.
Jan 31 07:02:35 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 07:02:35 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 07:02:35 compute-1 systemd[1]: sshd.service: Consumed 2.327s CPU time, read 32.0K from disk, written 20.0K to disk.
Jan 31 07:02:35 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 07:02:35 compute-1 systemd[1]: Stopping sshd-keygen.target...
Jan 31 07:02:35 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:02:35 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:02:35 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:02:35 compute-1 systemd[1]: Reached target sshd-keygen.target.
Jan 31 07:02:35 compute-1 systemd[1]: Starting OpenSSH server daemon...
Jan 31 07:02:35 compute-1 sshd[165399]: Server listening on 0.0.0.0 port 22.
Jan 31 07:02:35 compute-1 sshd[165399]: Server listening on :: port 22.
Jan 31 07:02:35 compute-1 systemd[1]: Started OpenSSH server daemon.
Jan 31 07:02:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:36.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:36.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:37 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:02:37 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:02:37 compute-1 systemd[1]: Reloading.
Jan 31 07:02:37 compute-1 systemd-rc-local-generator[165650]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:37 compute-1 systemd-sysv-generator[165659]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:37 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:02:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:38.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:38.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:02:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:40.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:02:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:02:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:40.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:02:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:42.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:42.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:42 compute-1 ceph-mds[84120]: mds.beacon.cephfs.compute-1.hhzmle missed beacon ack from the monitors
Jan 31 07:02:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:44 compute-1 ceph-mon[81728]: pgmap v555: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:44 compute-1 ceph-mon[81728]: pgmap v556: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:44 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:44 compute-1 ceph-mon[81728]: pgmap v557: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:44.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:44.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:45 compute-1 ceph-mon[81728]: pgmap v558: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:45 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 709 sec, osd.2 has slow ops (SLOW_OPS)
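
The blocked-for counter in these SLOW_OPS updates tracks wall-clock time: 658, 663, 668, 673, 678, 684, 688, 693, 698 s at roughly five-second intervals, then 709 s here. Subtracting the counter from the update's timestamp dates the stall: 07:02:45 minus 709 s is about 06:50:56, consistent across the updates to within the check's granularity. The same back-calculation as a worked sketch:

    from datetime import datetime, timedelta

    # Timestamp and blocked-for value from the 07:02:45 SLOW_OPS update above.
    seen = datetime(2026, 1, 31, 7, 2, 45)
    blocked = timedelta(seconds=709)

    print("oldest op on osd.2 stuck since ~", seen - blocked)
    # -> 2026-01-31 06:50:56
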
Jan 31 07:02:45 compute-1 sudo[146959]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:45 compute-1 sudo[174191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhtadzzlznjsqnldlxsalevoqcdtummo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842965.355101-968-183584286072161/AnsiballZ_systemd.py'
Jan 31 07:02:45 compute-1 sudo[174191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:45 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:02:45 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:02:45 compute-1 systemd[1]: man-db-cache-update.service: Consumed 6.897s CPU time.
Jan 31 07:02:45 compute-1 systemd[1]: run-rdde5a2004dc946f392b04c1708a829f8.service: Deactivated successfully.
Jan 31 07:02:46 compute-1 ceph-mon[81728]: pgmap v559: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:46 compute-1 python3.9[174193]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:02:46 compute-1 systemd[1]: Reloading.
Jan 31 07:02:46 compute-1 systemd-rc-local-generator[174219]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:46 compute-1 systemd-sysv-generator[174224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:46 compute-1 sudo[174191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:46.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:46.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:46 compute-1 sudo[174381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjetcolgtvdjukvwdekxotyquuuwcson ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842966.599108-968-114228973115697/AnsiballZ_systemd.py'
Jan 31 07:02:46 compute-1 sudo[174381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:47 compute-1 python3.9[174383]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:02:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:47 compute-1 systemd[1]: Reloading.
Jan 31 07:02:47 compute-1 systemd-rc-local-generator[174408]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:47 compute-1 systemd-sysv-generator[174411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:47 compute-1 sudo[174381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:47 compute-1 sudo[174571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csoootwdnbivqnmfqpsrrexlohltstwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842967.5519738-968-73391302969554/AnsiballZ_systemd.py'
Jan 31 07:02:47 compute-1 sudo[174571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:48 compute-1 python3.9[174573]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:02:48 compute-1 ceph-mon[81728]: pgmap v560: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:48 compute-1 systemd[1]: Reloading.
Jan 31 07:02:48 compute-1 systemd-sysv-generator[174606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:48 compute-1 systemd-rc-local-generator[174603]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:48 compute-1 sudo[174571]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:48.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:48.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:48 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:48 compute-1 sudo[174761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erthrwcbhrnmcjvgvbhcvlfywmqfxbah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842968.5768058-968-247676790102845/AnsiballZ_systemd.py'
Jan 31 07:02:48 compute-1 sudo[174761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:49 compute-1 python3.9[174763]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:02:49 compute-1 systemd[1]: Reloading.
Jan 31 07:02:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:49 compute-1 systemd-rc-local-generator[174788]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:49 compute-1 systemd-sysv-generator[174795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:49 compute-1 sudo[174761]: pam_unix(sudo:session): session closed for user root
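
From 07:02:45 the zuul user's Ansible run calls ansible.builtin.systemd once per libvirt unit (libvirtd, libvirtd-tcp.socket, libvirtd-tls.socket, virtproxyd-tcp.socket), each time with state=stopped, enabled=False, masked=True; every call triggers the systemd "Reloading." and generator lines that follow it. Per unit, that module invocation amounts to stop, disable, and mask. A sketch of the same effect straight through systemctl (unit list copied from the invocations above):

    import subprocess

    UNITS = [
        "libvirtd",
        "libvirtd-tcp.socket",
        "libvirtd-tls.socket",
        "virtproxyd-tcp.socket",
    ]

    # Mirrors ansible.builtin.systemd with state=stopped, enabled=False,
    # masked=True: stop the unit, remove it from boot, then mask it.
    for unit in UNITS:
        for verb in ("stop", "disable", "mask"):
            subprocess.run(["systemctl", verb, unit], check=False)
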
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.551924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969551992, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 3391, "num_deletes": 509, "total_data_size": 6592420, "memory_usage": 6694496, "flush_reason": "Manual Compaction"}
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969600795, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 4296122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12633, "largest_seqno": 16019, "table_properties": {"data_size": 4283786, "index_size": 7166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4101, "raw_key_size": 33505, "raw_average_key_size": 20, "raw_value_size": 4254548, "raw_average_value_size": 2614, "num_data_blocks": 312, "num_entries": 1627, "num_filter_entries": 1627, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842720, "oldest_key_time": 1769842720, "file_creation_time": 1769842969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 48946 microseconds, and 6830 cpu microseconds.
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.600852) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 4296122 bytes OK
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.600899) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.603477) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.603503) EVENT_LOG_v1 {"time_micros": 1769842969603496, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.603523) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 6576430, prev total WAL file size 6576430, number of live WAL files 2.
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.604543) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323534' seq:0, type:0; will stop at (end)
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(4195KB)], [24(7392KB)]
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969604619, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 11866353, "oldest_snapshot_seqno": -1}
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 5099 keys, 9720527 bytes, temperature: kUnknown
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969727250, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 9720527, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9685311, "index_size": 21346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 128147, "raw_average_key_size": 25, "raw_value_size": 9591623, "raw_average_value_size": 1881, "num_data_blocks": 889, "num_entries": 5099, "num_filter_entries": 5099, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769842969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.727514) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 9720527 bytes
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.729694) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.7 rd, 79.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 7.2 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(5.0) write-amplify(2.3) OK, records in: 6134, records dropped: 1035 output_compression: NoCompression
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.729714) EVENT_LOG_v1 {"time_micros": 1769842969729704, "job": 12, "event": "compaction_finished", "compaction_time_micros": 122710, "compaction_time_cpu_micros": 18426, "output_level": 6, "num_output_files": 1, "total_output_size": 9720527, "num_input_records": 6134, "num_output_records": 5099, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969730098, "job": 12, "event": "table_file_deletion", "file_number": 26}
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769842969730870, "job": 12, "event": "table_file_deletion", "file_number": 24}
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.604438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.730894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.730898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.730899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.730900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:02:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:02:49.730902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:02:50 compute-1 ceph-mon[81728]: pgmap v561: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:50.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:50.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:51 compute-1 podman[174848]: 2026-01-31 07:02:51.132029069 +0000 UTC m=+0.054935236 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 07:02:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:51 compute-1 sudo[174969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtazbxgmumczjgtbwrlehykozazctfyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842971.0573668-1055-65071214428367/AnsiballZ_systemd.py'
Jan 31 07:02:51 compute-1 sudo[174969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:51 compute-1 python3.9[174971]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:02:51 compute-1 systemd[1]: Reloading.
Jan 31 07:02:51 compute-1 systemd-rc-local-generator[174999]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:51 compute-1 systemd-sysv-generator[175003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:51 compute-1 sudo[174969]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:52 compute-1 ceph-mon[81728]: pgmap v562: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:52 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:02:52 compute-1 sudo[175159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjtuztxiylhjgkyrzevtfikuqkdwozx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842972.0564578-1055-246877125738517/AnsiballZ_systemd.py'
Jan 31 07:02:52 compute-1 sudo[175159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:52 compute-1 python3.9[175161]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:02:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:52.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:52 compute-1 systemd[1]: Reloading.
Jan 31 07:02:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:52.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:52 compute-1 systemd-rc-local-generator[175187]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:52 compute-1 systemd-sysv-generator[175192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:53 compute-1 sudo[175159]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:53 compute-1 sudo[175349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwsifbivpkpmkqtatjezxgwnzuqoitdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842973.1721497-1055-227200059101154/AnsiballZ_systemd.py'
Jan 31 07:02:53 compute-1 sudo[175349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:53 compute-1 python3.9[175351]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:02:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:53 compute-1 systemd[1]: Reloading.
Jan 31 07:02:53 compute-1 systemd-sysv-generator[175382]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:53 compute-1 systemd-rc-local-generator[175379]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:54 compute-1 sudo[175349]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:54 compute-1 ceph-mon[81728]: pgmap v563: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:54 compute-1 sudo[175539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rivyzlfgnweltvpnnwyqmyqvrepkudyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842974.1645186-1055-45377875744796/AnsiballZ_systemd.py'
Jan 31 07:02:54 compute-1 sudo[175539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:54.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:54.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:54 compute-1 python3.9[175541]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:02:54 compute-1 sudo[175539]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:55 compute-1 sudo[175694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cglqsgniobcdolbknobjlufouyybggvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842974.929242-1055-3046646168112/AnsiballZ_systemd.py'
Jan 31 07:02:55 compute-1 sudo[175694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:55 compute-1 python3.9[175696]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:02:55 compute-1 systemd[1]: Reloading.
Jan 31 07:02:55 compute-1 systemd-sysv-generator[175729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:55 compute-1 systemd-rc-local-generator[175724]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:55 compute-1 sudo[175694]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:56 compute-1 ceph-mon[81728]: pgmap v564: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:56 compute-1 sudo[175884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaimtlowrtygamrsmtluwqtgxqwnfpgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842976.3620458-1163-123745094398224/AnsiballZ_systemd.py'
Jan 31 07:02:56 compute-1 sudo[175884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:56.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:02:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:56.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:02:56 compute-1 python3.9[175886]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:02:56 compute-1 systemd[1]: Reloading.
Jan 31 07:02:57 compute-1 systemd-rc-local-generator[175914]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:57 compute-1 systemd-sysv-generator[175919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:57 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 07:02:57 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 07:02:57 compute-1 sudo[175884]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:57 compute-1 sudo[176077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkblcjgzrcqgcdqhpygmkikgjdofozvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842977.5770853-1187-223730938129720/AnsiballZ_systemd.py'
Jan 31 07:02:57 compute-1 sudo[176077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:58 compute-1 python3.9[176079]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:02:58 compute-1 sudo[176077]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:58 compute-1 ceph-mon[81728]: pgmap v565: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:02:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:58 compute-1 sudo[176232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghzhwgeqsfxhuhivwgnjmbjwpjtfnjws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842978.327752-1187-51643642141310/AnsiballZ_systemd.py'
Jan 31 07:02:58 compute-1 sudo[176232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:02:58.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:02:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:02:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:02:58.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:02:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:02:58 compute-1 python3.9[176234]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:02:58 compute-1 sudo[176232]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:59 compute-1 sudo[176387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uybnnnqzavlulyyofzwfvlywdktydhay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842979.0626585-1187-149609594750029/AnsiballZ_systemd.py'
Jan 31 07:02:59 compute-1 sudo[176387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:59 compute-1 podman[176389]: 2026-01-31 07:02:59.457679974 +0000 UTC m=+0.072009823 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:02:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:02:59 compute-1 python3.9[176390]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:02:59 compute-1 sudo[176387]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:00 compute-1 sudo[176568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oylvxaltwwcgezglhonibsnilndvcmvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842979.8618724-1187-109783374827745/AnsiballZ_systemd.py'
Jan 31 07:03:00 compute-1 sudo[176568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:00 compute-1 python3.9[176570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:00 compute-1 ceph-mon[81728]: pgmap v566: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:00 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:00 compute-1 sudo[176568]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:00.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:03:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:00.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:03:01 compute-1 sudo[176723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllkpdtmtxezfykylapshbyaumwwkmum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842980.8085449-1187-237217708959361/AnsiballZ_systemd.py'
Jan 31 07:03:01 compute-1 sudo[176723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:01 compute-1 python3.9[176725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:01 compute-1 sudo[176723]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:01 compute-1 ceph-mon[81728]: pgmap v567: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:01 compute-1 sudo[176878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skmmkbqiimiievehbcogwcmmubnoyral ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842981.5492435-1187-116836362412610/AnsiballZ_systemd.py'
Jan 31 07:03:01 compute-1 sudo[176878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:02 compute-1 python3.9[176880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:02 compute-1 sudo[176878]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:02 compute-1 sudo[177033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhbukwyvtxkhgyylenhnekbmhvzataru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842982.277034-1187-188842617990457/AnsiballZ_systemd.py'
Jan 31 07:03:02 compute-1 sudo[177033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:02.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:02 compute-1 python3.9[177035]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:02 compute-1 sudo[177033]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:03 compute-1 sudo[177188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzuarnjvbzkycoflliopmvyoordgasd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842982.9424508-1187-269188078672420/AnsiballZ_systemd.py'
Jan 31 07:03:03 compute-1 sudo[177188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:03 compute-1 python3.9[177190]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:03 compute-1 sudo[177188]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:03 compute-1 ceph-mon[81728]: pgmap v568: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:03 compute-1 sudo[177343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mamdoeimjtdpgwdprqtrhlfbwyigblpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842983.6912029-1187-102312087809233/AnsiballZ_systemd.py'
Jan 31 07:03:03 compute-1 sudo[177343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:04 compute-1 python3.9[177345]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:04 compute-1 sudo[177343]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:04 compute-1 sudo[177498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pconpkuihdsjxpxqwbkgankegnxydarf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842984.399429-1187-66695349435334/AnsiballZ_systemd.py'
Jan 31 07:03:04 compute-1 sudo[177498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:04 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:04.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:04.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:04 compute-1 python3.9[177500]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:04 compute-1 sudo[177498]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:05 compute-1 sudo[177653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apnpestycddltwljchqylhhunieatjwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842985.1105626-1187-84463796859012/AnsiballZ_systemd.py'
Jan 31 07:03:05 compute-1 sudo[177653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:05 compute-1 python3.9[177655]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:05 compute-1 ceph-mon[81728]: pgmap v569: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:05 compute-1 sudo[177653]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:06 compute-1 sudo[177808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chcjdupwjzhpouogpcdggrhyftpwexis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842985.798266-1187-275307479230006/AnsiballZ_systemd.py'
Jan 31 07:03:06 compute-1 sudo[177808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:06 compute-1 python3.9[177810]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:06 compute-1 sudo[177808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:06 compute-1 sudo[177963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcvcoirmksvfdvxlkdejxfvuoflivbke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842986.4791386-1187-11227283097264/AnsiballZ_systemd.py'
Jan 31 07:03:06 compute-1 sudo[177963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:06.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:06.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:06 compute-1 python3.9[177965]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:07 compute-1 sudo[177963]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:07 compute-1 sudo[178118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyguprurszayrnyyqqmyckmaaewakxzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842987.1567554-1187-104602413640792/AnsiballZ_systemd.py'
Jan 31 07:03:07 compute-1 sudo[178118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:07 compute-1 python3.9[178120]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:03:07 compute-1 ceph-mon[81728]: pgmap v570: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:07 compute-1 sudo[178118]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:08.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:03:08 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:08 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:08.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:08 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:10 compute-1 ceph-mon[81728]: pgmap v571: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:10 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 729 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:10 compute-1 sudo[178273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loifbnusgnwouuisqcxrreckgptfxhdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842990.2107615-1493-97125634029177/AnsiballZ_file.py'
Jan 31 07:03:10 compute-1 sudo[178273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:10 compute-1 python3.9[178275]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:03:10 compute-1 sudo[178273]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:03:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:10.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:10 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:10 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:10.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:11 compute-1 sudo[178425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkgpknmqyoflzrqjtkptrvlrfrmlnhkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842990.8149796-1493-200112177701598/AnsiballZ_file.py'
Jan 31 07:03:11 compute-1 sudo[178425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:11 compute-1 python3.9[178427]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:03:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:11 compute-1 sudo[178425]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:11 compute-1 sudo[178577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbkikcygnuujwrxjhnntbnpmkmflogo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842991.4137733-1493-48046538959829/AnsiballZ_file.py'
Jan 31 07:03:11 compute-1 sudo[178577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:11 compute-1 python3.9[178579]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:03:11 compute-1 sudo[178577]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:12 compute-1 sudo[178729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjnlvokgcbnhedsjxpnzegeicbzephf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842991.9212227-1493-264991572320941/AnsiballZ_file.py'
Jan 31 07:03:12 compute-1 sudo[178729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:12 compute-1 python3.9[178731]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:03:12 compute-1 sudo[178729]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:12 compute-1 ceph-mon[81728]: pgmap v572: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:12 compute-1 sudo[178881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmpnkponlnunnuzkdwryntnadedwbxup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842992.45921-1493-19639765924098/AnsiballZ_file.py'
Jan 31 07:03:12 compute-1 sudo[178881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:03:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:12.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:12 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:03:12 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:12.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:03:12 compute-1 python3.9[178883]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:03:12 compute-1 sudo[178881]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:13 compute-1 sudo[179033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwkfzpuhbroyfvrmvmvaqnfgjlnbtful ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842993.0950472-1493-167684044933682/AnsiballZ_file.py'
Jan 31 07:03:13 compute-1 sudo[179033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:13 compute-1 python3.9[179035]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:03:13 compute-1 sudo[179033]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:13 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:14 compute-1 python3.9[179185]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:03:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:14.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:14.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:15 compute-1 ceph-mon[81728]: pgmap v573: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:15 compute-1 sudo[179335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgxoiglawlknvqimknfxzgdbzprkcfqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842995.3783658-1646-280644478086742/AnsiballZ_stat.py'
Jan 31 07:03:15 compute-1 sudo[179335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:15 compute-1 python3.9[179337]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:15 compute-1 sudo[179335]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:16 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:16 compute-1 ceph-mon[81728]: pgmap v574: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:16 compute-1 sudo[179460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvfdextfvneipcjjtxskxucitusabhmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842995.3783658-1646-280644478086742/AnsiballZ_copy.py'
Jan 31 07:03:16 compute-1 sudo[179460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:16 compute-1 python3.9[179462]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769842995.3783658-1646-280644478086742/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:16 compute-1 sudo[179460]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:16.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:16.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:17 compute-1 sudo[179612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxuwjrtjboqldshhuwpnphyfrokphrkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842996.7385762-1646-172370467374853/AnsiballZ_stat.py'
Jan 31 07:03:17 compute-1 sudo[179612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:17 compute-1 python3.9[179614]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:17 compute-1 sudo[179612]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:17 compute-1 sudo[179737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxkzihtglzckqtsetwowjxbmfjuhzsvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842996.7385762-1646-172370467374853/AnsiballZ_copy.py'
Jan 31 07:03:17 compute-1 sudo[179737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:17 compute-1 python3.9[179739]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769842996.7385762-1646-172370467374853/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:17 compute-1 sudo[179737]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:18 compute-1 sudo[179889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psacaxowpxdqfwhdlmulqueghtkseixb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842997.982147-1646-65728418030333/AnsiballZ_stat.py'
Jan 31 07:03:18 compute-1 sudo[179889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:18 compute-1 python3.9[179891]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:18 compute-1 sudo[179889]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:18 compute-1 ceph-mon[81728]: pgmap v575: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:18.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:18.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:18 compute-1 sudo[180014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twcrartnxvyjyjfvmiibgcdkuywnznpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842997.982147-1646-65728418030333/AnsiballZ_copy.py'
Jan 31 07:03:18 compute-1 sudo[180014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:18 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:18 compute-1 python3.9[180016]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769842997.982147-1646-65728418030333/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:19 compute-1 sudo[180014]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:19 compute-1 sudo[180166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szoecmxrpqtqgbrilrbwjxumllaqdxtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842999.1358614-1646-203135738012510/AnsiballZ_stat.py'
Jan 31 07:03:19 compute-1 sudo[180166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:19 compute-1 python3.9[180168]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:19 compute-1 sudo[180166]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:03:19.878 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:03:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:03:19.879 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:03:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:03:19.879 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:03:19 compute-1 sudo[180291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngydxtkvwjjukbyxbarxwpjgtruedall ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842999.1358614-1646-203135738012510/AnsiballZ_copy.py'
Jan 31 07:03:19 compute-1 sudo[180291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:20 compute-1 python3.9[180293]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769842999.1358614-1646-203135738012510/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:20 compute-1 sudo[180291]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:20 compute-1 sudo[180443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksecvudhzuuwfhkpqzqdwlnsavpltbay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843000.287567-1646-223361949309734/AnsiballZ_stat.py'
Jan 31 07:03:20 compute-1 sudo[180443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:20.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:20.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:20 compute-1 python3.9[180445]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:20 compute-1 sudo[180443]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:20 compute-1 ceph-mon[81728]: pgmap v576: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:20 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:21 compute-1 sudo[180568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zppnwhqlkoocfgqjhlyoohngmacporut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843000.287567-1646-223361949309734/AnsiballZ_copy.py'
Jan 31 07:03:21 compute-1 sudo[180568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:21 compute-1 podman[180570]: 2026-01-31 07:03:21.244336525 +0000 UTC m=+0.050900917 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 31 07:03:21 compute-1 python3.9[180571]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769843000.287567-1646-223361949309734/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:21 compute-1 sudo[180568]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:21 compute-1 sudo[180740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nocqvjmcdwzgtnmxoyfujgexsxzwfala ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843001.6842258-1646-105523519476877/AnsiballZ_stat.py'
Jan 31 07:03:21 compute-1 sudo[180740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:21 compute-1 ceph-mon[81728]: pgmap v577: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:22 compute-1 python3.9[180742]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:22 compute-1 sudo[180740]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:22 compute-1 sudo[180865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyxowartxisyktfdpbwtydgmofpemtpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843001.6842258-1646-105523519476877/AnsiballZ_copy.py'
Jan 31 07:03:22 compute-1 sudo[180865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:22 compute-1 python3.9[180867]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769843001.6842258-1646-105523519476877/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:22 compute-1 sudo[180865]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:22.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:22.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:22 compute-1 sudo[181017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maxfdilpbayhgvkzsccxsjoufmrjzkvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843002.7487342-1646-277176209427683/AnsiballZ_stat.py'
Jan 31 07:03:22 compute-1 sudo[181017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:23 compute-1 python3.9[181019]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:23 compute-1 sudo[181017]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:23 compute-1 sudo[181140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqopppbyffekmohjwkmmdhimvgvttyvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843002.7487342-1646-277176209427683/AnsiballZ_copy.py'
Jan 31 07:03:23 compute-1 sudo[181140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:23 compute-1 python3.9[181142]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769843002.7487342-1646-277176209427683/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:23 compute-1 sudo[181140]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:23 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:24 compute-1 sudo[181292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfrruxxxazsxcswrvrezvwcjnvhwjatu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843003.8336298-1646-166066361468035/AnsiballZ_stat.py'
Jan 31 07:03:24 compute-1 sudo[181292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:24 compute-1 ceph-mon[81728]: pgmap v578: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:24 compute-1 python3.9[181294]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:24 compute-1 sudo[181292]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:24 compute-1 sudo[181417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqxzlwsvpnpzrzsdrfxeambexwqkobrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843003.8336298-1646-166066361468035/AnsiballZ_copy.py'
Jan 31 07:03:24 compute-1 sudo[181417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:24.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:24.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:24 compute-1 python3.9[181419]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769843003.8336298-1646-166066361468035/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:24 compute-1 sudo[181417]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:25 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:25 compute-1 sudo[181569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woizyagipcqrghtrvtjpkmhikilewsqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843005.1682632-1985-24051024826417/AnsiballZ_command.py'
Jan 31 07:03:25 compute-1 sudo[181569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:25 compute-1 python3.9[181571]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 07:03:25 compute-1 sudo[181569]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:26 compute-1 ceph-mon[81728]: pgmap v579: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:26 compute-1 sudo[181722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tczoofiuwshbxsntvnynuezoacpplmvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843005.9387832-2012-45432624103407/AnsiballZ_file.py'
Jan 31 07:03:26 compute-1 sudo[181722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:26 compute-1 python3.9[181724]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:26 compute-1 sudo[181722]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:26 compute-1 sudo[181874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tofguponvkbrgjjpvpkjndlzhiwuhilz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843006.4809217-2012-37593454489801/AnsiballZ_file.py'
Jan 31 07:03:26 compute-1 sudo[181874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:26.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:26.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:26 compute-1 python3.9[181876]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:26 compute-1 sudo[181874]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:27 compute-1 sudo[182026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtbysmcconxwyohbvszlrtendrapbuiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843007.0307868-2012-182523823219343/AnsiballZ_file.py'
Jan 31 07:03:27 compute-1 sudo[182026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:27 compute-1 sudo[182029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:03:27 compute-1 sudo[182029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:03:27 compute-1 sudo[182029]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:27 compute-1 sudo[182054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:03:27 compute-1 sudo[182054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:03:27 compute-1 sudo[182054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:27 compute-1 python3.9[182028]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:27 compute-1 sudo[182026]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:27 compute-1 sudo[182079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:03:27 compute-1 sudo[182079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:03:27 compute-1 sudo[182079]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:27 compute-1 sudo[182104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:03:27 compute-1 sudo[182104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:03:27 compute-1 sudo[182310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyjdbazzrpyurbusldcplrmumbkxukdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843007.6218405-2012-47409424688173/AnsiballZ_file.py'
Jan 31 07:03:27 compute-1 sudo[182310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:27 compute-1 sudo[182104]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:28 compute-1 python3.9[182312]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:28 compute-1 sudo[182310]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:28 compute-1 ceph-mon[81728]: pgmap v580: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:03:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:03:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:03:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:03:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:03:28 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:03:28 compute-1 sudo[182462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afmknxrkppzuhgozufusgrfryoilikos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843008.2115798-2012-28620144771306/AnsiballZ_file.py'
Jan 31 07:03:28 compute-1 sudo[182462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:28 compute-1 python3.9[182464]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:28 compute-1 sudo[182462]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:28.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:28.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:28 compute-1 sudo[182614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flywkqkezmwiijdhpbebevhtetuimkoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843008.736832-2012-267383580158055/AnsiballZ_file.py'
Jan 31 07:03:28 compute-1 sudo[182614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:29 compute-1 python3.9[182616]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:29 compute-1 sudo[182614]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:29 compute-1 sudo[182779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wznclzgcqhctnjibnyxaxjewxzvttofd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843009.3652608-2012-136022132022256/AnsiballZ_file.py'
Jan 31 07:03:29 compute-1 sudo[182779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:29 compute-1 podman[182740]: 2026-01-31 07:03:29.656678609 +0000 UTC m=+0.071964126 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:03:29 compute-1 python3.9[182787]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:29 compute-1 sudo[182779]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:30 compute-1 sudo[182944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-marepwjdcdjoubcjnejfyjcqwojkaxkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843009.951482-2012-131643323925621/AnsiballZ_file.py'
Jan 31 07:03:30 compute-1 sudo[182944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:30 compute-1 ceph-mon[81728]: pgmap v581: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:30 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:30 compute-1 python3.9[182946]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:30 compute-1 sudo[182944]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:30.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:30.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:30 compute-1 sudo[183096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eswnsohynlldqiuljtdizjvfaiwmypkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843010.547738-2012-75419234380277/AnsiballZ_file.py'
Jan 31 07:03:30 compute-1 sudo[183096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:30 compute-1 python3.9[183098]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:30 compute-1 sudo[183096]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:31 compute-1 sudo[183248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmftprmacnlfgofwvsntpqtcnrxnuycc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843011.0698977-2012-9390167451471/AnsiballZ_file.py'
Jan 31 07:03:31 compute-1 sudo[183248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:31 compute-1 python3.9[183250]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:31 compute-1 sudo[183248]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:31 compute-1 sudo[183400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqqvfrazsesahcfnauaickdbovwbqpbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843011.6607533-2012-21786536834863/AnsiballZ_file.py'
Jan 31 07:03:31 compute-1 sudo[183400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:32 compute-1 python3.9[183402]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:32 compute-1 sudo[183400]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:32 compute-1 ceph-mon[81728]: pgmap v582: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:32 compute-1 sudo[183552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-govbsoduoljemcbwmoriapnryhthshpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843012.254081-2012-82482559334144/AnsiballZ_file.py'
Jan 31 07:03:32 compute-1 sudo[183552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:32 compute-1 python3.9[183554]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:32 compute-1 sudo[183552]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:32.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:32.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:33 compute-1 sudo[183704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwhskwvmsyixuaxkcloliityqjybznbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843012.8238287-2012-243837003503457/AnsiballZ_file.py'
Jan 31 07:03:33 compute-1 sudo[183704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:33 compute-1 python3.9[183706]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:33 compute-1 sudo[183704]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:33 compute-1 sudo[183806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:03:33 compute-1 sudo[183806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:03:33 compute-1 sudo[183806]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:33 compute-1 sudo[183855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:03:33 compute-1 sudo[183855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:03:33 compute-1 sudo[183855]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:33 compute-1 sudo[183905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drvujldtspjbuxgzeswhqmwswibnotdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843013.346913-2012-179108212771943/AnsiballZ_file.py'
Jan 31 07:03:33 compute-1 sudo[183905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:33 compute-1 python3.9[183908]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:33 compute-1 sudo[183905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:33 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:34 compute-1 ceph-mon[81728]: pgmap v583: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:34 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:03:34 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:03:34 compute-1 sudo[184058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afwzvlcwwhchzebawelrjbckxqqqpept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843014.2954214-2310-209164245688653/AnsiballZ_stat.py'
Jan 31 07:03:34 compute-1 sudo[184058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:34.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:34 compute-1 python3.9[184060]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:34 compute-1 sudo[184058]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:35 compute-1 sudo[184181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knarspeaayaezrbxxiqkqzeqbggikofc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843014.2954214-2310-209164245688653/AnsiballZ_copy.py'
Jan 31 07:03:35 compute-1 sudo[184181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:35 compute-1 python3.9[184183]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843014.2954214-2310-209164245688653/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:35 compute-1 sudo[184181]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:35 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:35 compute-1 sudo[184333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibegmrucpqrvttigtjxvrorecigddffs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843015.4582038-2310-126198023878467/AnsiballZ_stat.py'
Jan 31 07:03:35 compute-1 sudo[184333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:35 compute-1 python3.9[184335]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:35 compute-1 sudo[184333]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:36 compute-1 sudo[184456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjlkrxvqyzukmyhoqsjrhbnyucgosfyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843015.4582038-2310-126198023878467/AnsiballZ_copy.py'
Jan 31 07:03:36 compute-1 sudo[184456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:36 compute-1 python3.9[184458]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843015.4582038-2310-126198023878467/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:36 compute-1 sudo[184456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:36 compute-1 ceph-mon[81728]: pgmap v584: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:36.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:36.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:36 compute-1 sudo[184608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmyjeyaqcywchppqdeuuxeldifhutpdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843016.5656564-2310-160095756487223/AnsiballZ_stat.py'
Jan 31 07:03:36 compute-1 sudo[184608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:37 compute-1 python3.9[184610]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:37 compute-1 sudo[184608]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:37 compute-1 sudo[184731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgxykdybrmkqhsfxrmtextergcsztcqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843016.5656564-2310-160095756487223/AnsiballZ_copy.py'
Jan 31 07:03:37 compute-1 sudo[184731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:37 compute-1 python3.9[184733]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843016.5656564-2310-160095756487223/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:37 compute-1 sudo[184731]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:37 compute-1 sudo[184883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alabpolthkalodqffmujsdnsvefqztod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843017.6346533-2310-198344778171534/AnsiballZ_stat.py'
Jan 31 07:03:37 compute-1 sudo[184883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:38 compute-1 python3.9[184885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:38 compute-1 sudo[184883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:38 compute-1 sudo[185006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaziffipvurabknfrikwthksbmkfxzdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843017.6346533-2310-198344778171534/AnsiballZ_copy.py'
Jan 31 07:03:38 compute-1 sudo[185006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:38 compute-1 ceph-mon[81728]: pgmap v585: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:38 compute-1 python3.9[185008]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843017.6346533-2310-198344778171534/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:38 compute-1 sudo[185006]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:38.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:38.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:39 compute-1 sudo[185158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pijnovbaxsljsvodeajzlecdvhllvdih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843018.7696693-2310-124784634779424/AnsiballZ_stat.py'
Jan 31 07:03:39 compute-1 sudo[185158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:39 compute-1 python3.9[185160]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:39 compute-1 sudo[185158]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:39 compute-1 sudo[185281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ludtovvvejmoapsunytsceymypwzwfag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843018.7696693-2310-124784634779424/AnsiballZ_copy.py'
Jan 31 07:03:39 compute-1 sudo[185281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:39 compute-1 python3.9[185283]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843018.7696693-2310-124784634779424/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:39 compute-1 sudo[185281]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:40 compute-1 sudo[185433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebipmodrytpokcytnjfvjwosvxbjdtux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843019.8459122-2310-90682121849963/AnsiballZ_stat.py'
Jan 31 07:03:40 compute-1 sudo[185433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:40 compute-1 python3.9[185435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:40 compute-1 sudo[185433]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:40 compute-1 sudo[185556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtgjfwozwwounoxyuzxyghmghmvlmehn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843019.8459122-2310-90682121849963/AnsiballZ_copy.py'
Jan 31 07:03:40 compute-1 sudo[185556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:40 compute-1 ceph-mon[81728]: pgmap v586: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:40 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:40 compute-1 python3.9[185558]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843019.8459122-2310-90682121849963/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:40 compute-1 sudo[185556]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:40.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:40.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:41 compute-1 sudo[185708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgocrvyoiifdkehfoxrbtoyzrjdkrkab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843020.86612-2310-73840043971557/AnsiballZ_stat.py'
Jan 31 07:03:41 compute-1 sudo[185708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:41 compute-1 python3.9[185710]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:41 compute-1 sudo[185708]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:41 compute-1 sudo[185831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufaavwzokasshgrqobmjhvuwjodiwobl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843020.86612-2310-73840043971557/AnsiballZ_copy.py'
Jan 31 07:03:41 compute-1 sudo[185831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:41 compute-1 ceph-mon[81728]: pgmap v587: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:41 compute-1 python3.9[185833]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843020.86612-2310-73840043971557/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:41 compute-1 sudo[185831]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:42 compute-1 sudo[185983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgfjgwysrrmdsqfvmusoogetlxytrslb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843021.858418-2310-85857300293547/AnsiballZ_stat.py'
Jan 31 07:03:42 compute-1 sudo[185983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:42 compute-1 python3.9[185985]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:42 compute-1 sudo[185983]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:42 compute-1 sudo[186106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfmtuaqsflroabjsvsbsdesogvnhunfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843021.858418-2310-85857300293547/AnsiballZ_copy.py'
Jan 31 07:03:42 compute-1 sudo[186106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:42.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:42.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:42 compute-1 python3.9[186108]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843021.858418-2310-85857300293547/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:42 compute-1 sudo[186106]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:43 compute-1 sudo[186258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnyssrrsdkgenmdpuuaarcuqpilijryk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843022.9451032-2310-235387135740548/AnsiballZ_stat.py'
Jan 31 07:03:43 compute-1 sudo[186258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:43 compute-1 python3.9[186260]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:43 compute-1 sudo[186258]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:43 compute-1 sudo[186381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lregzlukvngzymbelmiuzbeslspzzmmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843022.9451032-2310-235387135740548/AnsiballZ_copy.py'
Jan 31 07:03:43 compute-1 sudo[186381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:43 compute-1 ceph-mon[81728]: pgmap v588: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:43 compute-1 python3.9[186383]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843022.9451032-2310-235387135740548/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:43 compute-1 sudo[186381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:43 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:44 compute-1 sudo[186533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chakjpvjpjaadbsebvoqppgfqpmvvbgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843023.9128392-2310-39010329186327/AnsiballZ_stat.py'
Jan 31 07:03:44 compute-1 sudo[186533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:44 compute-1 python3.9[186535]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:44 compute-1 sudo[186533]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:44 compute-1 sudo[186656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpixoufgrjhpykjkifkkgnycscrqpmvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843023.9128392-2310-39010329186327/AnsiballZ_copy.py'
Jan 31 07:03:44 compute-1 sudo[186656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:44.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:44.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:44 compute-1 python3.9[186658]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843023.9128392-2310-39010329186327/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:44 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 763 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:44 compute-1 sudo[186656]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:45 compute-1 sudo[186808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aelchftgldkdknyxmhcxnreqqkrjsfby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843024.9123285-2310-216650240203301/AnsiballZ_stat.py'
Jan 31 07:03:45 compute-1 sudo[186808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:45 compute-1 python3.9[186810]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:45 compute-1 sudo[186808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:45 compute-1 sudo[186931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwppjawarncrcdqzifnyllfdknstkpcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843024.9123285-2310-216650240203301/AnsiballZ_copy.py'
Jan 31 07:03:45 compute-1 sudo[186931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:45 compute-1 python3.9[186933]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843024.9123285-2310-216650240203301/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:45 compute-1 ceph-mon[81728]: pgmap v589: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:45 compute-1 sudo[186931]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:46 compute-1 sudo[187083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygtvwcgkdnjmohzuizizeztkfmpmjgiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843025.9373674-2310-88809689879970/AnsiballZ_stat.py'
Jan 31 07:03:46 compute-1 sudo[187083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:46 compute-1 python3.9[187085]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:46 compute-1 sudo[187083]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:46 compute-1 sudo[187206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtemfxettkznfaofchtxcaqkwjkczzjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843025.9373674-2310-88809689879970/AnsiballZ_copy.py'
Jan 31 07:03:46 compute-1 sudo[187206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:46.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:46.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:46 compute-1 python3.9[187208]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843025.9373674-2310-88809689879970/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:46 compute-1 sudo[187206]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:47 compute-1 sudo[187358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqrcxlbvflpeblerdapmozhtmezzadlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843026.9920418-2310-188200767345979/AnsiballZ_stat.py'
Jan 31 07:03:47 compute-1 sudo[187358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:47 compute-1 python3.9[187360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:47 compute-1 sudo[187358]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:47 compute-1 sudo[187481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qybzqpunehiedtfdgqhjrrpdszyvvlan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843026.9920418-2310-188200767345979/AnsiballZ_copy.py'
Jan 31 07:03:47 compute-1 sudo[187481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:47 compute-1 python3.9[187483]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843026.9920418-2310-188200767345979/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:47 compute-1 sudo[187481]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:47 compute-1 ceph-mon[81728]: pgmap v590: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:48 compute-1 sudo[187633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcblbnbroaysucahegghzpfolsuwzitn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843027.9912763-2310-224015986725970/AnsiballZ_stat.py'
Jan 31 07:03:48 compute-1 sudo[187633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:48 compute-1 python3.9[187635]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:03:48 compute-1 sudo[187633]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:48 compute-1 sudo[187756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhrrzxwjctoeetwuxdmhwmunmaucuxgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843027.9912763-2310-224015986725970/AnsiballZ_copy.py'
Jan 31 07:03:48 compute-1 sudo[187756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:48.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:48.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:48 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:48 compute-1 python3.9[187758]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843027.9912763-2310-224015986725970/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:48 compute-1 sudo[187756]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:50 compute-1 python3.9[187908]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:50 compute-1 ceph-mon[81728]: pgmap v591: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:50 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:50.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:50.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:50 compute-1 sudo[188061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyrgpsjayenvmrpqiptkjcgslsmqwuqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843030.5323071-2927-252671138478757/AnsiballZ_seboolean.py'
Jan 31 07:03:50 compute-1 sudo[188061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:51 compute-1 python3.9[188063]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 07:03:52 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 07:03:52 compute-1 podman[188065]: 2026-01-31 07:03:52.129816563 +0000 UTC m=+0.050175608 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 07:03:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:52.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:52.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:53 compute-1 sudo[188061]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:54 compute-1 ceph-mon[81728]: pgmap v592: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:54 compute-1 ceph-mon[81728]: pgmap v593: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:54 compute-1 sudo[188237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hegxmeefymlpscqydioyzhcdulyrkufy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843033.9937317-2951-84481474448107/AnsiballZ_copy.py'
Jan 31 07:03:54 compute-1 sudo[188237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:54 compute-1 python3.9[188239]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:54 compute-1 sudo[188237]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:03:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:54.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:03:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:54.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:54 compute-1 sudo[188389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhjjhjeikmqgklnvndpzrtiaxsogdqlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843034.5882447-2951-270371030479921/AnsiballZ_copy.py'
Jan 31 07:03:54 compute-1 sudo[188389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:55 compute-1 python3.9[188391]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:55 compute-1 sudo[188389]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:55 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:55 compute-1 sudo[188543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uajjjvhzhklqjmzrbelnscnodqwpdtvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843035.1986911-2951-122302793862682/AnsiballZ_copy.py'
Jan 31 07:03:55 compute-1 sudo[188543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:55 compute-1 python3.9[188545]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:55 compute-1 sudo[188543]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:55 compute-1 sshd-session[188392]: Received disconnect from 91.224.92.54 port 62386:11:  [preauth]
Jan 31 07:03:55 compute-1 sshd-session[188392]: Disconnected from authenticating user root 91.224.92.54 port 62386 [preauth]
Jan 31 07:03:55 compute-1 sudo[188695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baumbxvrtjckqngdyyswechcbrgkspaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843035.7132356-2951-62466611267089/AnsiballZ_copy.py'
Jan 31 07:03:55 compute-1 sudo[188695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:56 compute-1 python3.9[188697]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:56 compute-1 sudo[188695]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:56 compute-1 ceph-mon[81728]: pgmap v594: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:56 compute-1 sudo[188847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayqwqzhqcqgysqzrrqnkdsunhnmdgaie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843036.2487037-2951-8909550153948/AnsiballZ_copy.py'
Jan 31 07:03:56 compute-1 sudo[188847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:56 compute-1 python3.9[188849]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:56 compute-1 sudo[188847]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:56.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:56.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:57 compute-1 sudo[188999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwcofukotabwepydaimdqnzdkjqewbnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843037.0374854-3059-17240294983411/AnsiballZ_copy.py'
Jan 31 07:03:57 compute-1 sudo[188999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:57 compute-1 python3.9[189001]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:57 compute-1 sudo[188999]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:57 compute-1 sudo[189151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxitjcgnqqeohurkgzqutfxxbvkilvmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843037.6087599-3059-265860959674200/AnsiballZ_copy.py'
Jan 31 07:03:57 compute-1 sudo[189151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:58 compute-1 python3.9[189153]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:58 compute-1 sudo[189151]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:58 compute-1 sudo[189303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aickivjqmzoaejjxksacbmszafuqqfik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843038.2087917-3059-216812721647459/AnsiballZ_copy.py'
Jan 31 07:03:58 compute-1 sudo[189303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:58 compute-1 ceph-mon[81728]: pgmap v595: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:58 compute-1 python3.9[189305]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:03:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:03:58.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:03:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:03:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:03:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:03:58.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:03:59 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:03:59 compute-1 sudo[189303]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:59 compute-1 ceph-mon[81728]: pgmap v596: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:03:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:03:59 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:03:59 compute-1 sudo[189467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzsjoaouuvssamrbztcfblcigboodedb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843039.7421274-3059-78138495931308/AnsiballZ_copy.py'
Jan 31 07:03:59 compute-1 sudo[189467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:00 compute-1 podman[189430]: 2026-01-31 07:04:00.020654858 +0000 UTC m=+0.077115945 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:04:00 compute-1 python3.9[189473]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:00 compute-1 sudo[189467]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:00 compute-1 sudo[189634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzmhgbntbfvljcmgfontnfxdemgmzdic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843040.3122795-3059-232071855528666/AnsiballZ_copy.py'
Jan 31 07:04:00 compute-1 sudo[189634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:00 compute-1 python3.9[189636]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:00 compute-1 sudo[189634]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:00.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:00.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:01 compute-1 sudo[189786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cemogqdxxulmsdsoywsyyrmdgtqwkqng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843040.8617148-3167-9145241687869/AnsiballZ_systemd.py'
Jan 31 07:04:01 compute-1 sudo[189786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:01 compute-1 python3.9[189788]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:04:01 compute-1 systemd[1]: Reloading.
Jan 31 07:04:01 compute-1 systemd-sysv-generator[189817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:01 compute-1 systemd-rc-local-generator[189811]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:01 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 07:04:01 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 07:04:01 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 07:04:01 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 07:04:01 compute-1 systemd[1]: Starting libvirt logging daemon...
Jan 31 07:04:01 compute-1 ceph-mon[81728]: pgmap v597: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:01 compute-1 systemd[1]: Started libvirt logging daemon.
Jan 31 07:04:01 compute-1 sudo[189786]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:02 compute-1 sudo[189978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jirvufkavvajidllgzdiecldetsdswmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843041.872301-3167-126852908944457/AnsiballZ_systemd.py'
Jan 31 07:04:02 compute-1 sudo[189978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:02 compute-1 python3.9[189980]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:04:02 compute-1 systemd[1]: Reloading.
Jan 31 07:04:02 compute-1 systemd-rc-local-generator[190008]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:02 compute-1 systemd-sysv-generator[190012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:02 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 07:04:02 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 07:04:02 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 07:04:02 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 07:04:02 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 07:04:02 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 07:04:02 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 07:04:02 compute-1 systemd[1]: Started libvirt nodedev daemon.
Jan 31 07:04:02 compute-1 sudo[189978]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:02.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:02.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:03 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 07:04:03 compute-1 sudo[190196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-overbvhqddowarvrlrveczudyewpzcia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843042.8539133-3167-24215090789000/AnsiballZ_systemd.py'
Jan 31 07:04:03 compute-1 sudo[190196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:03 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 07:04:03 compute-1 python3.9[190198]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:04:03 compute-1 systemd[1]: Reloading.
Jan 31 07:04:03 compute-1 systemd-rc-local-generator[190226]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:03 compute-1 systemd-sysv-generator[190229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:03 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 07:04:03 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 07:04:03 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 07:04:03 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 07:04:03 compute-1 systemd[1]: Starting libvirt proxy daemon...
Jan 31 07:04:03 compute-1 systemd[1]: Started libvirt proxy daemon.
Jan 31 07:04:03 compute-1 ceph-mon[81728]: pgmap v598: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:03 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 07:04:03 compute-1 sudo[190196]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:03 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 07:04:04 compute-1 sudo[190417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cryvpzvmbitcoslzaqrpdsuaqvbsiazs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843043.9055023-3167-46569128870105/AnsiballZ_systemd.py'
Jan 31 07:04:04 compute-1 sudo[190417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:04 compute-1 python3.9[190419]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:04:04 compute-1 systemd[1]: Reloading.
Jan 31 07:04:04 compute-1 systemd-sysv-generator[190448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:04 compute-1 systemd-rc-local-generator[190445]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:04 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 07:04:04 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 07:04:04 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 07:04:04 compute-1 setroubleshoot[190169]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 20264d1d-a7c0-4681-8607-3c59bfaeb396
Jan 31 07:04:04 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 07:04:04 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 07:04:04 compute-1 setroubleshoot[190169]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default,
                                                  then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  Allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
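Taken together, the two plugin suggestions describe the usual SELinux triage loop: enable full auditing so AVC records carry PATH information, reproduce the denial, inspect the audit records, and only fall back to a local policy module if the file permissions turn out to be correct. A minimal sketch of that loop, using the commands from the message itself (restarting virtlogd as the reproduction step is an assumption; the log does not say how the AVC was re-triggered):

    # auditctl -w /etc/shadow -p w                                # full auditing, per the dac_override plugin
    # systemctl restart virtlogd                                  # assumed way to recreate the AVC
    # ausearch -m avc -ts recent                                  # look for the AVC and any PATH record
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd   # only if the access is legitimate
    # semodule -X 300 -i my-virtlogd.pp                           # install the generated local module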
                                                  
Jan 31 07:04:04 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 07:04:04 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 07:04:04 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 07:04:04 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 07:04:04 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 07:04:04 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 07:04:04 compute-1 systemd[1]: Started libvirt QEMU daemon.
Jan 31 07:04:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:04 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:04.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:04 compute-1 sudo[190417]: pam_unix(sudo:session): session closed for user root
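The ansible systemd task above (daemon_reload=True, name=virtqemud.service, state=restarted) drives everything from the "Reloading." line down to "Started libvirt QEMU daemon.": systemd first re-reads unit files, then brings the virtqemud sockets back up before the daemon itself, which is why the "Listening on ..." lines precede the start. A rough shell equivalent of the task, assuming the same unit name:

    # systemctl daemon-reload
    # systemctl restart virtqemud.service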
Jan 31 07:04:05 compute-1 sudo[190633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fydyexylkssnfuiaduxjgvluopruvjdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843044.9675937-3167-84821769748839/AnsiballZ_systemd.py'
Jan 31 07:04:05 compute-1 sudo[190633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:05 compute-1 python3.9[190635]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:04:05 compute-1 systemd[1]: Reloading.
Jan 31 07:04:05 compute-1 systemd-rc-local-generator[190658]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:05 compute-1 systemd-sysv-generator[190663]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Jan 31 07:04:05 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 07:04:05 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 07:04:05 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 07:04:05 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 07:04:05 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 07:04:05 compute-1 ceph-mon[81728]: pgmap v599: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:05 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 07:04:05 compute-1 systemd[1]: Starting libvirt secret daemon...
Jan 31 07:04:05 compute-1 systemd[1]: Started libvirt secret daemon.
Jan 31 07:04:05 compute-1 sudo[190633]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:06.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:06.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:07 compute-1 sudo[190846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghnfmrlhessyghgbjadsoguixlbfkrve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843046.8746305-3278-266402840348042/AnsiballZ_file.py'
Jan 31 07:04:07 compute-1 sudo[190846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:07 compute-1 python3.9[190848]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:07 compute-1 sudo[190846]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:07 compute-1 sudo[190998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aevlfdjpduouwrgvxdvjdjkwbqdtemkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843047.4973695-3302-113828412098084/AnsiballZ_find.py'
Jan 31 07:04:07 compute-1 sudo[190998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:07 compute-1 ceph-mon[81728]: pgmap v600: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:07 compute-1 python3.9[191000]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:04:07 compute-1 sudo[190998]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:08 compute-1 sudo[191150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvjudpaulojcrxayhsanxmfvjxphtkgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843048.1234012-3326-61187369914879/AnsiballZ_command.py'
Jan 31 07:04:08 compute-1 sudo[191150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:08 compute-1 python3.9[191152]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:04:08 compute-1 sudo[191150]: pam_unix(sudo:session): session closed for user root
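The shell one-liner logged above recovers the Ceph cluster fsid from the generated ceph.conf: awk splits each line on '=', prints the value side of any line matching fsid, and xargs trims the surrounding whitespace. Assuming the fsid in ceph.conf is the same UUID this play later passes to virsh, the command behaves like:

    # awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    ef73c6e0-6d85-55c2-9347-1f544d3e3d3a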
Jan 31 07:04:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:08.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:08.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:09 compute-1 python3.9[191306]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:04:09 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:10 compute-1 python3.9[191456]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:10 compute-1 ceph-mon[81728]: pgmap v601: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:10 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:10 compute-1 python3.9[191577]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843049.805529-3383-213873109562457/.source.xml follow=False _original_basename=secret.xml.j2 checksum=17d5318e54ac3e2c57aea873011e00a806d508d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:10.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:10 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:10 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:10.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:11 compute-1 sudo[191727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uytudhibpnvsgflsctammkcwnpyprpzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843050.977249-3428-194338996684917/AnsiballZ_command.py'
Jan 31 07:04:11 compute-1 sudo[191727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:11 compute-1 python3.9[191729]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:04:11 compute-1 polkitd[43509]: Registered Authentication Agent for unix-process:191731:410785 (system bus name :1.1818 [pkttyagent --process 191731 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 07:04:11 compute-1 polkitd[43509]: Unregistered Authentication Agent for unix-process:191731:410785 (system bus name :1.1818, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 07:04:11 compute-1 polkitd[43509]: Registered Authentication Agent for unix-process:191730:410785 (system bus name :1.1819 [pkttyagent --process 191730 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 07:04:11 compute-1 polkitd[43509]: Unregistered Authentication Agent for unix-process:191730:410785 (system bus name :1.1819, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 07:04:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:11 compute-1 sudo[191727]: pam_unix(sudo:session): session closed for user root
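The two virsh calls above rotate the libvirt secret the compute node uses for Ceph authentication: the old secret is undefined, then redefined from /tmp/secret.xml, which was copied in with mode 0600 moments earlier and is deleted immediately afterwards. The XML content itself is never logged (content=NOT_LOGGING_PARAMETER); a minimal sketch of a typical libvirt ceph-type secret, assuming the conventional client.openstack usage name, would be:

    # cat > /tmp/secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <uuid>ef73c6e0-6d85-55c2-9347-1f544d3e3d3a</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    EOF
    # virsh secret-undefine ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
    # virsh secret-define --file /tmp/secret.xml
    # rm -f /tmp/secret.xml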
Jan 31 07:04:12 compute-1 python3.9[191891]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:12 compute-1 ceph-mon[81728]: pgmap v602: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:12 compute-1 sudo[192041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykvshancxayrjfvwtoircmkjurgwpkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843052.5573628-3476-2686572211824/AnsiballZ_command.py'
Jan 31 07:04:12 compute-1 sudo[192041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:12.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:12 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:12 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:12.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:12 compute-1 sudo[192041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:13 compute-1 sudo[192194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkpqwolojotacgagpxuxyfdoibxllvbw ; FSID=ef73c6e0-6d85-55c2-9347-1f544d3e3d3a KEY=AQAnpX1pAAAAABAAzTaottZ9ZAhzIerr7s6NMg== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843053.1571405-3500-53386905771761/AnsiballZ_command.py'
Jan 31 07:04:13 compute-1 sudo[192194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:13 compute-1 polkitd[43509]: Registered Authentication Agent for unix-process:192197:411000 (system bus name :1.1822 [pkttyagent --process 192197 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 07:04:13 compute-1 polkitd[43509]: Unregistered Authentication Agent for unix-process:192197:411000 (system bus name :1.1822, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 07:04:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:13 compute-1 sudo[192194]: pam_unix(sudo:session): session closed for user root
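The task above exports FSID and KEY into the command's environment, but the command body itself is not logged. After defining a ceph secret, the conventional next step is to attach the key to it, so the hidden command is most plausibly the standard virsh invocation below; this is an inference from the environment variables, not something the log confirms:

    # virsh secret-set-value --secret "$FSID" --base64 "$KEY"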
Jan 31 07:04:14 compute-1 sudo[192352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhzyyenvzizsvfyuqyhedzsgdnfbhwep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843053.9701004-3524-14211673136381/AnsiballZ_copy.py'
Jan 31 07:04:14 compute-1 sudo[192352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:14 compute-1 python3.9[192354]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:14 compute-1 sudo[192352]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:14 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:14 compute-1 ceph-mon[81728]: pgmap v603: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:14 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:14 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 07:04:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:14.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:14.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:14 compute-1 sudo[192504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seolubcrjweulpwoekoviyffvsdzgsrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843054.5808454-3548-5442754444631/AnsiballZ_stat.py'
Jan 31 07:04:14 compute-1 sudo[192504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:14 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 07:04:14 compute-1 python3.9[192506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:15 compute-1 sudo[192504]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:15 compute-1 sudo[192627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfejyhjznkvewrnjqpusjldznfsbhszi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843054.5808454-3548-5442754444631/AnsiballZ_copy.py'
Jan 31 07:04:15 compute-1 sudo[192627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:15 compute-1 python3.9[192629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843054.5808454-3548-5442754444631/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:15 compute-1 sudo[192627]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:15 compute-1 ceph-mon[81728]: pgmap v604: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:16 compute-1 sudo[192779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teerwwkozltzgedbjkhuavllvxxnmqdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843056.0109658-3596-172402368727894/AnsiballZ_file.py'
Jan 31 07:04:16 compute-1 sudo[192779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:16 compute-1 python3.9[192781]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:16 compute-1 sudo[192779]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:16 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:16.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:16 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:16.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:16 compute-1 sudo[192931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haduguxregdngccvzyavmmcfwcjouohw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843056.639763-3620-20310412368360/AnsiballZ_stat.py'
Jan 31 07:04:16 compute-1 sudo[192931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:17 compute-1 python3.9[192933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:17 compute-1 sudo[192931]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:17 compute-1 sudo[193009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izzlyggehlzbcjocuqmiphosbefnaaxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843056.639763-3620-20310412368360/AnsiballZ_file.py'
Jan 31 07:04:17 compute-1 sudo[193009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:17 compute-1 python3.9[193011]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:17 compute-1 sudo[193009]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:17 compute-1 ceph-mon[81728]: pgmap v605: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:17 compute-1 sudo[193161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoyyapzzipjssyymkxdujywkszbwmgnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843057.7558646-3656-57893481818157/AnsiballZ_stat.py'
Jan 31 07:04:17 compute-1 sudo[193161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:18 compute-1 python3.9[193163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:18 compute-1 sudo[193161]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:18 compute-1 sudo[193239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbejflhjqeqxyittuxdydcnbqoinejv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843057.7558646-3656-57893481818157/AnsiballZ_file.py'
Jan 31 07:04:18 compute-1 sudo[193239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:18 compute-1 python3.9[193241]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1r_aov6w recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:18 compute-1 sudo[193239]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:18 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:18 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:18 compute-1 sudo[193391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfsznhkrfztuyygazkaruhgiuawpzdps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843058.7640586-3692-160197631455512/AnsiballZ_stat.py'
Jan 31 07:04:18 compute-1 sudo[193391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:19 compute-1 python3.9[193393]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:19 compute-1 sudo[193391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:19 compute-1 sudo[193469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccipxoxsjrcayovrzpgtoqttqmamqxfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843058.7640586-3692-160197631455512/AnsiballZ_file.py'
Jan 31 07:04:19 compute-1 sudo[193469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:19 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:19 compute-1 python3.9[193471]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:19 compute-1 sudo[193469]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:04:19.879 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:04:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:04:19.880 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:04:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:04:19.880 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:04:19 compute-1 ceph-mon[81728]: pgmap v606: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:20 compute-1 sudo[193621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxygujuroddebvynfwhzalgbxlyhfjnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843059.9125931-3731-53728348993081/AnsiballZ_command.py'
Jan 31 07:04:20 compute-1 sudo[193621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:20 compute-1 python3.9[193623]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:04:20 compute-1 sudo[193621]: pam_unix(sudo:session): session closed for user root
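Listing the ruleset as JSON (nft -j) gives the firewall role a machine-readable snapshot of the current tables and chains that it can compare against the desired state, rather than parsing nft's human-oriented listing. To inspect the same snapshot interactively, one option (json.tool is just a convenient pretty-printer):

    # nft -j list ruleset | python3 -m json.tool | head -n 20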
Jan 31 07:04:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:20.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:20.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:20 compute-1 sudo[193774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nskthsyvuzivufxkqglmyckfcwzhixwp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843060.558259-3756-255351745767898/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:04:20 compute-1 sudo[193774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:21 compute-1 python3[193776]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:04:21 compute-1 sudo[193774]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:22 compute-1 sudo[193926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxxebvkbhdziwnedkwqtiswcfvjmvupe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843061.8259015-3779-196860815412850/AnsiballZ_stat.py'
Jan 31 07:04:22 compute-1 sudo[193926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:22 compute-1 ceph-mon[81728]: pgmap v607: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:22 compute-1 python3.9[193928]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:22 compute-1 sudo[193926]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:22 compute-1 auditd[701]: Audit daemon rotating log files
Jan 31 07:04:22 compute-1 sudo[194012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyzzunyrfdldtltkzjquydgiiizynvax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843061.8259015-3779-196860815412850/AnsiballZ_file.py'
Jan 31 07:04:22 compute-1 sudo[194012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:22 compute-1 podman[193978]: 2026-01-31 07:04:22.565657507 +0000 UTC m=+0.079220673 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:04:22 compute-1 python3.9[194026]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:22 compute-1 sudo[194012]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:22.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:23 compute-1 sudo[194176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzfflrjakyecuqkykikxxqszjugpxszu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843063.005117-3815-204988324260129/AnsiballZ_stat.py'
Jan 31 07:04:23 compute-1 sudo[194176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:23 compute-1 python3.9[194178]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:23 compute-1 sudo[194176]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:23 compute-1 sudo[194301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkaxhwgajrobleccsoqmhctzuuveixnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843063.005117-3815-204988324260129/AnsiballZ_copy.py'
Jan 31 07:04:23 compute-1 sudo[194301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:24 compute-1 python3.9[194303]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843063.005117-3815-204988324260129/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:24 compute-1 sudo[194301]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:24 compute-1 sudo[194453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgdtjwmrucgkdsiizgfofhagidicwgwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843064.2724926-3860-227070355241932/AnsiballZ_stat.py'
Jan 31 07:04:24 compute-1 sudo[194453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:24 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:24 compute-1 ceph-mon[81728]: pgmap v608: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:24 compute-1 python3.9[194455]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:24 compute-1 sudo[194453]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:24 compute-1 sudo[194531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlhzshizwqgwlvknhtlykhelisfkkltt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843064.2724926-3860-227070355241932/AnsiballZ_file.py'
Jan 31 07:04:24 compute-1 sudo[194531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:25 compute-1 python3.9[194533]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:25 compute-1 sudo[194531]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:25 compute-1 sudo[194683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcmcgknuunzjxyaxtqjmgeouukzxhblj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843065.418821-3896-98798016764318/AnsiballZ_stat.py'
Jan 31 07:04:25 compute-1 sudo[194683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:25 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:25 compute-1 ceph-mon[81728]: pgmap v609: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:25 compute-1 python3.9[194685]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:25 compute-1 sudo[194683]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:26 compute-1 sudo[194761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwlbxbucecoclcopngwovjqahtbdtzmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843065.418821-3896-98798016764318/AnsiballZ_file.py'
Jan 31 07:04:26 compute-1 sudo[194761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:26 compute-1 python3.9[194763]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:26 compute-1 sudo[194761]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:26.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:26 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:26 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:26.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:26 compute-1 sudo[194913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwvrqdlklxipfxdcalhcrootbdvxwsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843066.5721536-3932-131825116746924/AnsiballZ_stat.py'
Jan 31 07:04:26 compute-1 sudo[194913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:27 compute-1 python3.9[194915]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:27 compute-1 sudo[194913]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:27 compute-1 sudo[195038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhctzkfbolbjjlluyhbxqvqugdlfhvzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843066.5721536-3932-131825116746924/AnsiballZ_copy.py'
Jan 31 07:04:27 compute-1 sudo[195038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:27 compute-1 python3.9[195040]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843066.5721536-3932-131825116746924/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:27 compute-1 sudo[195038]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:28 compute-1 ceph-mon[81728]: pgmap v610: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:28 compute-1 sudo[195190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elcnkgosgnumvrfphesmezczjbwnehnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843067.9171612-3977-4781346248370/AnsiballZ_file.py'
Jan 31 07:04:28 compute-1 sudo[195190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:28 compute-1 python3.9[195192]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:28 compute-1 sudo[195190]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:28 compute-1 sudo[195342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juqyyezjmlhsguunhoqkdiomczfbdgih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843068.5113924-4001-203636536363201/AnsiballZ_command.py'
Jan 31 07:04:28 compute-1 sudo[195342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:28.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:28.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:28 compute-1 python3.9[195344]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:04:28 compute-1 sudo[195342]: pam_unix(sudo:session): session closed for user root
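This step concatenates the five generated files in their intended load order and feeds the result to nft in check-only mode (-c), so syntax errors or references to missing chains are caught before anything is applied to the live ruleset. Reproducing the validation by hand:

    # cat /etc/nftables/edpm-chains.nft \
          /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-rules.nft \
          /etc/nftables/edpm-update-jumps.nft \
          /etc/nftables/edpm-jumps.nft | nft -c -f -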
Jan 31 07:04:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:29 compute-1 sudo[195497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chvdatxvirsaqjrpbvyuwgmamwuemdzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843069.204697-4025-47442997072886/AnsiballZ_blockinfile.py'
Jan 31 07:04:29 compute-1 sudo[195497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:29 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:29 compute-1 python3.9[195499]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:29 compute-1 sudo[195497]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:30 compute-1 podman[195524]: 2026-01-31 07:04:30.141927809 +0000 UTC m=+0.064169615 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:30 compute-1 ceph-mon[81728]: pgmap v611: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:30 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:30 compute-1 sudo[195676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sswxoxpzkuevlwuxwlxvycbpimwyvzuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843070.2114966-4052-122749596833727/AnsiballZ_command.py'
Jan 31 07:04:30 compute-1 sudo[195676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:30 compute-1 python3.9[195678]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:04:30 compute-1 sudo[195676]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:30.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:30 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:30 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:30.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:31 compute-1 sudo[195829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqdgliyyjjqieimgowuutkfkuwftegja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843070.9031558-4076-204574314608293/AnsiballZ_stat.py'
Jan 31 07:04:31 compute-1 sudo[195829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:31 compute-1 python3.9[195831]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:04:31 compute-1 sudo[195829]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:31 compute-1 sudo[195983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkggmykrloydvpwqvtvkuncqcfiavygx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843071.6167762-4101-208875985951769/AnsiballZ_command.py'
Jan 31 07:04:31 compute-1 sudo[195983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:32 compute-1 python3.9[195985]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:04:32 compute-1 sudo[195983]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:32 compute-1 ceph-mon[81728]: pgmap v612: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:32 compute-1 sudo[196138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eypygbenkkmlzvikbqlxgvqhxwbmgzje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843072.3094804-4124-272842763308954/AnsiballZ_file.py'
Jan 31 07:04:32 compute-1 sudo[196138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:32 compute-1 python3.9[196140]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:32 compute-1 sudo[196138]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:32 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:32 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:32.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:33 compute-1 sudo[196290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kobfbdcqjskkgpyiheoomkkrnaomvleq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843072.9055355-4149-212115254839120/AnsiballZ_stat.py'
Jan 31 07:04:33 compute-1 sudo[196290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:33 compute-1 python3.9[196292]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:33 compute-1 sudo[196290]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:33 compute-1 sudo[196387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:04:33 compute-1 sudo[196387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:33 compute-1 sudo[196387]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:33 compute-1 sudo[196437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkybnxubnmjhexvvzoyotylbmlzjrabf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843072.9055355-4149-212115254839120/AnsiballZ_copy.py'
Jan 31 07:04:33 compute-1 sudo[196437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:33 compute-1 sudo[196439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:04:33 compute-1 sudo[196439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:33 compute-1 sudo[196439]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:33 compute-1 sudo[196466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:04:33 compute-1 sudo[196466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:33 compute-1 sudo[196466]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:33 compute-1 sudo[196491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:04:33 compute-1 sudo[196491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:33 compute-1 python3.9[196444]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843072.9055355-4149-212115254839120/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:33 compute-1 sudo[196437]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:34 compute-1 podman[196612]: 2026-01-31 07:04:34.541978488 +0000 UTC m=+0.383478668 container exec 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:34 compute-1 ceph-mon[81728]: pgmap v613: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:34 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:34 compute-1 podman[196612]: 2026-01-31 07:04:34.815480692 +0000 UTC m=+0.656980872 container exec_died 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:04:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:34 compute-1 sudo[196769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhdtqkvuovnvarotssywttbtielbijic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843074.658823-4193-57221247874942/AnsiballZ_stat.py'
Jan 31 07:04:34 compute-1 sudo[196769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:35 compute-1 python3.9[196771]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:35 compute-1 sudo[196769]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:35 compute-1 sudo[196491]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:35 compute-1 sudo[196982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwfpyjhehzatkwiusrvcjjshkhjhnoot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843074.658823-4193-57221247874942/AnsiballZ_copy.py'
Jan 31 07:04:35 compute-1 sudo[196982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:35 compute-1 sudo[196985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:04:35 compute-1 sudo[196985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:35 compute-1 sudo[196985]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:35 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:35 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:04:35 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:04:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:35 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:04:35 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:04:35 compute-1 sudo[197010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:04:35 compute-1 sudo[197010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:35 compute-1 sudo[197010]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:35 compute-1 sudo[197035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:04:35 compute-1 sudo[197035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:35 compute-1 sudo[197035]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:35 compute-1 sudo[197060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:04:35 compute-1 sudo[197060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:35 compute-1 python3.9[196984]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843074.658823-4193-57221247874942/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:35 compute-1 sudo[196982]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:35 compute-1 sudo[197060]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:36 compute-1 sudo[197265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sckltgojyikxaxhcrybrznwgirototfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843075.9702666-4238-103015161955930/AnsiballZ_stat.py'
Jan 31 07:04:36 compute-1 sudo[197265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:36 compute-1 python3.9[197267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:04:36 compute-1 sudo[197265]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:36 compute-1 ceph-mon[81728]: pgmap v614: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:04:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:04:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:04:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:04:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:04:36 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:04:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:36 compute-1 sudo[197388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnuwiyghyqsdazrhtxkpghpmhbcloywk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843075.9702666-4238-103015161955930/AnsiballZ_copy.py'
Jan 31 07:04:36 compute-1 sudo[197388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:36.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:36 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:36 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:36.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:36 compute-1 python3.9[197390]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843075.9702666-4238-103015161955930/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:36 compute-1 sudo[197388]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:37 compute-1 sudo[197540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-squgafqxexsozrjvbngkzqtoqioqxhny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843077.239977-4284-3037421229557/AnsiballZ_systemd.py'
Jan 31 07:04:37 compute-1 sudo[197540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:37 compute-1 python3.9[197542]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:04:37 compute-1 systemd[1]: Reloading.
Jan 31 07:04:37 compute-1 systemd-rc-local-generator[197570]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:37 compute-1 systemd-sysv-generator[197574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:38 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 07:04:38 compute-1 sudo[197540]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:38 compute-1 sudo[197732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdmofgebwlkbvtjhtqsfjwesymztbfci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843078.3173168-4307-171997730402425/AnsiballZ_systemd.py'
Jan 31 07:04:38 compute-1 sudo[197732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:38 compute-1 ceph-mon[81728]: pgmap v615: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:38 compute-1 python3.9[197734]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 07:04:38 compute-1 systemd[1]: Reloading.
Jan 31 07:04:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:38 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:38 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:38.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:38.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:38 compute-1 systemd-rc-local-generator[197760]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:38 compute-1 systemd-sysv-generator[197764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:39 compute-1 systemd[1]: Reloading.
Jan 31 07:04:39 compute-1 systemd-sysv-generator[197802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:39 compute-1 systemd-rc-local-generator[197798]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:39 compute-1 sudo[197732]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.604705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079604746, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1683, "num_deletes": 251, "total_data_size": 3093627, "memory_usage": 3132512, "flush_reason": "Manual Compaction"}
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079639157, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 2020588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16024, "largest_seqno": 17702, "table_properties": {"data_size": 2014165, "index_size": 3306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15531, "raw_average_key_size": 19, "raw_value_size": 2000004, "raw_average_value_size": 2509, "num_data_blocks": 146, "num_entries": 797, "num_filter_entries": 797, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842970, "oldest_key_time": 1769842970, "file_creation_time": 1769843079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 34519 microseconds, and 4005 cpu microseconds.
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.639223) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 2020588 bytes OK
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.639239) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.649190) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.649259) EVENT_LOG_v1 {"time_micros": 1769843079649244, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.649296) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3085771, prev total WAL file size 3101490, number of live WAL files 2.
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.650349) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1973KB)], [27(9492KB)]
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079650396, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 11741115, "oldest_snapshot_seqno": -1}
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5379 keys, 11192395 bytes, temperature: kUnknown
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079815759, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 11192395, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11154421, "index_size": 23400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 136335, "raw_average_key_size": 25, "raw_value_size": 11054801, "raw_average_value_size": 2055, "num_data_blocks": 958, "num_entries": 5379, "num_filter_entries": 5379, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.816314) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 11192395 bytes
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.821185) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 70.8 rd, 67.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.3 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(11.3) write-amplify(5.5) OK, records in: 5896, records dropped: 517 output_compression: NoCompression
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.821210) EVENT_LOG_v1 {"time_micros": 1769843079821199, "job": 14, "event": "compaction_finished", "compaction_time_micros": 165766, "compaction_time_cpu_micros": 18155, "output_level": 6, "num_output_files": 1, "total_output_size": 11192395, "num_input_records": 5896, "num_output_records": 5379, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079821647, "job": 14, "event": "table_file_deletion", "file_number": 29}
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843079822547, "job": 14, "event": "table_file_deletion", "file_number": 27}
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.650290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.822672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.822680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.822683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.822686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:39 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:39.822689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:39 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:39 compute-1 ceph-mon[81728]: pgmap v616: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:39 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:40 compute-1 sshd-session[140665]: Connection closed by 192.168.122.30 port 47248
Jan 31 07:04:40 compute-1 sshd-session[140662]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:04:40 compute-1 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 07:04:40 compute-1 systemd[1]: session-48.scope: Consumed 2min 57.060s CPU time.
Jan 31 07:04:40 compute-1 systemd-logind[788]: Session 48 logged out. Waiting for processes to exit.
Jan 31 07:04:40 compute-1 systemd-logind[788]: Removed session 48.
Jan 31 07:04:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:40.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:40 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:40 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:40.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:41 compute-1 ceph-mon[81728]: pgmap v617: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:42.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:42 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:42 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:42.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:43 compute-1 sudo[197832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:04:43 compute-1 sudo[197832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:43 compute-1 sudo[197832]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:43 compute-1 sudo[197857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:04:43 compute-1 sudo[197857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:43 compute-1 sudo[197857]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:04:43 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:04:44 compute-1 ceph-mon[81728]: pgmap v618: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:44 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:44.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:44 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:44 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:44.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:45 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:45 compute-1 ceph-mon[81728]: pgmap v619: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:45 compute-1 sshd-session[197882]: Accepted publickey for zuul from 192.168.122.30 port 51202 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 07:04:46 compute-1 systemd-logind[788]: New session 49 of user zuul.
Jan 31 07:04:46 compute-1 systemd[1]: Started Session 49 of User zuul.
Jan 31 07:04:46 compute-1 sshd-session[197882]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:04:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:46 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:46.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:46 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:46.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:47 compute-1 python3.9[198035]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:04:47 compute-1 ceph-mon[81728]: pgmap v620: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:48 compute-1 python3.9[198189]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:04:48 compute-1 network[198206]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:04:48 compute-1 network[198207]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:04:48 compute-1 network[198208]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:04:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:48.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:48 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:48 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:48.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:49 compute-1 ceph-mon[81728]: pgmap v621: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:49 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:04:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:50 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:50 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:50 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Jan 31 07:04:50 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:50.996682) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:04:50 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Jan 31 07:04:50 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843090996761, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 426, "num_deletes": 251, "total_data_size": 453843, "memory_usage": 462768, "flush_reason": "Manual Compaction"}
Jan 31 07:04:50 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843091003777, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 298528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17707, "largest_seqno": 18128, "table_properties": {"data_size": 296117, "index_size": 511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6229, "raw_average_key_size": 19, "raw_value_size": 291190, "raw_average_value_size": 895, "num_data_blocks": 22, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843079, "oldest_key_time": 1769843079, "file_creation_time": 1769843090, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 7124 microseconds, and 1713 cpu microseconds.
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.003821) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 298528 bytes OK
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.003838) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.007357) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.007384) EVENT_LOG_v1 {"time_micros": 1769843091007376, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.007404) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 451114, prev total WAL file size 451114, number of live WAL files 2.
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.007792) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(291KB)], [30(10MB)]
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843091007827, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 11490923, "oldest_snapshot_seqno": -1}
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 5191 keys, 9867169 bytes, temperature: kUnknown
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843091108946, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 9867169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9831444, "index_size": 21663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 133177, "raw_average_key_size": 25, "raw_value_size": 9735841, "raw_average_value_size": 1875, "num_data_blocks": 882, "num_entries": 5191, "num_filter_entries": 5191, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843091, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.109200) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9867169 bytes
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.111934) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.5 rd, 97.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(71.5) write-amplify(33.1) OK, records in: 5704, records dropped: 513 output_compression: NoCompression
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.111980) EVENT_LOG_v1 {"time_micros": 1769843091111965, "job": 16, "event": "compaction_finished", "compaction_time_micros": 101204, "compaction_time_cpu_micros": 19461, "output_level": 6, "num_output_files": 1, "total_output_size": 9867169, "num_input_records": 5704, "num_output_records": 5191, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843091112167, "job": 16, "event": "table_file_deletion", "file_number": 32}
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843091113187, "job": 16, "event": "table_file_deletion", "file_number": 30}
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.007719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.113265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.113271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.113272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.113274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:51 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:04:51.113275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:04:52 compute-1 ceph-mon[81728]: pgmap v622: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:52 compute-1 podman[198326]: 2026-01-31 07:04:52.664734339 +0000 UTC m=+0.051915204 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 07:04:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:52.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:53 compute-1 sudo[198496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plbeukbbiheuqvauypdmmcitxialewnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843093.105905-102-29159222536665/AnsiballZ_setup.py'
Jan 31 07:04:53 compute-1 sudo[198496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:53 compute-1 python3.9[198498]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:04:53 compute-1 sudo[198496]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:54 compute-1 ceph-mon[81728]: pgmap v623: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:54 compute-1 sudo[198580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfsggwpygvqpedlldtoijermbypnllwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843093.105905-102-29159222536665/AnsiballZ_dnf.py'
Jan 31 07:04:54 compute-1 sudo[198580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:54 compute-1 python3.9[198582]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:04:54 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:04:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:54.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:04:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:04:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:55 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:04:56 compute-1 ceph-mon[81728]: pgmap v624: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:56.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:56.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:04:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:04:58.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:04:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:04:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:04:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:04:58.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:04:58 compute-1 ceph-mon[81728]: pgmap v625: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:04:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:04:59 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:00 compute-1 ceph-mon[81728]: pgmap v626: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:00 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:00 compute-1 sudo[198580]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:00.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:01 compute-1 podman[198673]: 2026-01-31 07:05:01.178626918 +0000 UTC m=+0.069073238 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 07:05:01 compute-1 sudo[198759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvjermzlkridbzigxtyawqpsalfytxyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843100.7890346-138-233635038429469/AnsiballZ_stat.py'
Jan 31 07:05:01 compute-1 sudo[198759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:01 compute-1 python3.9[198761]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:05:01 compute-1 sudo[198759]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:02 compute-1 sudo[198911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffqkbyjjmmjmzklfchovjiwpxvgbdmwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843101.8038652-168-972064349648/AnsiballZ_command.py'
Jan 31 07:05:02 compute-1 sudo[198911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:02 compute-1 python3.9[198913]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:02 compute-1 sudo[198911]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:02 compute-1 ceph-mon[81728]: pgmap v627: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:02.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:02.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:03 compute-1 sudo[199064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viquvzahcfpmvgvytkxgigjbuzsdpemr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843102.8922188-198-99911833529854/AnsiballZ_stat.py'
Jan 31 07:05:03 compute-1 sudo[199064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:03 compute-1 python3.9[199066]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:05:03 compute-1 sudo[199064]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:03 compute-1 sudo[199216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwzxjdcvcqgdkraydbmmwkwrnirooxgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843103.5883791-222-234486246978439/AnsiballZ_command.py'
Jan 31 07:05:03 compute-1 sudo[199216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:03 compute-1 python3.9[199218]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:04 compute-1 sudo[199216]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:04 compute-1 sudo[199369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkdtsfjpqamdyfxxmcxtpyfesdkqzpif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843104.1879678-246-129129847040401/AnsiballZ_stat.py'
Jan 31 07:05:04 compute-1 sudo[199369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:04 compute-1 python3.9[199371]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:05:04 compute-1 sudo[199369]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:04 compute-1 ceph-mon[81728]: pgmap v628: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:04 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:04.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:05 compute-1 sudo[199492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rddmyaxiucelkhglkgniqykxwqmrqshm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843104.1879678-246-129129847040401/AnsiballZ_copy.py'
Jan 31 07:05:05 compute-1 sudo[199492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:05 compute-1 python3.9[199494]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843104.1879678-246-129129847040401/.source.iscsi _original_basename=._94hvon1 follow=False checksum=ecd6d93ca62e531390959c4d54f1112a3ee79a3b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:05 compute-1 sudo[199492]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:05 compute-1 ceph-mon[81728]: pgmap v629: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:05 compute-1 sudo[199644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvcbjcqrittkfpejbmhyfroteyrtqcct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843105.4430494-291-41622782288205/AnsiballZ_file.py'
Jan 31 07:05:05 compute-1 sudo[199644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:05 compute-1 python3.9[199646]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:06 compute-1 sudo[199644]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:06 compute-1 sudo[199796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkozfgsarbrlcabrhqitrzxoamabbfzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843106.2615054-315-47778585982045/AnsiballZ_lineinfile.py'
Jan 31 07:05:06 compute-1 sudo[199796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:06 compute-1 python3.9[199798]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:06 compute-1 sudo[199796]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:06 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:05:06 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:05:06 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:05:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:06.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:06 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:05:06 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:06.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:05:07 compute-1 ceph-mon[81728]: pgmap v630: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:07 compute-1 sudo[199949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urzwaibknqctvjkeobwimjfimtuidanw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843107.1932573-342-64377648778848/AnsiballZ_systemd_service.py'
Jan 31 07:05:07 compute-1 sudo[199949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:08 compute-1 python3.9[199951]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:08 compute-1 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 07:05:08 compute-1 sudo[199949]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:08 compute-1 sudo[200105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxuuwznfydncimkctlcgcvjfssjplqvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843108.3935723-366-17386286144488/AnsiballZ_systemd_service.py'
Jan 31 07:05:08 compute-1 sudo[200105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:08 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:08 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:08.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:08.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:08 compute-1 python3.9[200107]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:08 compute-1 systemd[1]: Reloading.
Jan 31 07:05:09 compute-1 systemd-rc-local-generator[200131]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:05:09 compute-1 systemd-sysv-generator[200136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:05:09 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 07:05:09 compute-1 systemd[1]: Starting Open-iSCSI...
Jan 31 07:05:09 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 07:05:09 compute-1 systemd[1]: Started Open-iSCSI.
Jan 31 07:05:09 compute-1 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 07:05:09 compute-1 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 07:05:09 compute-1 sudo[200105]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:09 compute-1 ceph-mon[81728]: pgmap v631: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:09 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:09 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:10 compute-1 python3.9[200305]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:05:10 compute-1 network[200322]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:05:10 compute-1 network[200323]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:05:10 compute-1 network[200324]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:05:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:10 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:10.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:10 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:10.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:11 compute-1 ceph-mon[81728]: pgmap v632: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:12.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:12.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:13 compute-1 sudo[200594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edduomrikvfnpfcbnfcglzgtjsqavqrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843113.2321413-435-251274932585476/AnsiballZ_dnf.py'
Jan 31 07:05:13 compute-1 sudo[200594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:13 compute-1 python3.9[200596]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:05:14 compute-1 ceph-mon[81728]: pgmap v633: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:14 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:14 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:14 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:15 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:16 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:05:16 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:05:16 compute-1 systemd[1]: Reloading.
Jan 31 07:05:16 compute-1 systemd-rc-local-generator[200641]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:05:16 compute-1 systemd-sysv-generator[200646]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:05:16 compute-1 ceph-mon[81728]: pgmap v634: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:16 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:05:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:16.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:16 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:16 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:16.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:17 compute-1 sudo[200594]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:18 compute-1 sudo[200911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsvniqilvyfovjzuscswzmkplrjxurnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843118.12503-462-200315308315928/AnsiballZ_file.py'
Jan 31 07:05:18 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:05:18 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:05:18 compute-1 sudo[200911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:18 compute-1 systemd[1]: run-r79e27c4fd79c4ec4b7b97c95e597521b.service: Deactivated successfully.
Jan 31 07:05:18 compute-1 python3.9[200914]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 07:05:18 compute-1 sudo[200911]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:18 compute-1 ceph-mon[81728]: pgmap v635: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:18.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:18.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:19 compute-1 sudo[201064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbmjnomnrdkigvqiyerfduxicfcpnqdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843118.8296685-486-77093477471431/AnsiballZ_modprobe.py'
Jan 31 07:05:19 compute-1 sudo[201064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:19 compute-1 python3.9[201066]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 07:05:19 compute-1 sudo[201064]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:19 compute-1 ceph-mon[81728]: pgmap v636: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:19 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:05:19.880 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:05:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:05:19.881 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:05:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:05:19.882 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:05:20 compute-1 sudo[201220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsfkuolugbgfcstbwntjxqpbnufmdtpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843119.6240983-510-272295701374279/AnsiballZ_stat.py'
Jan 31 07:05:20 compute-1 sudo[201220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:20 compute-1 python3.9[201222]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:05:20 compute-1 sudo[201220]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:20 compute-1 sudo[201343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-layyzoufjbxqmacyluygsepcufpezvzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843119.6240983-510-272295701374279/AnsiballZ_copy.py'
Jan 31 07:05:20 compute-1 sudo[201343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:20 compute-1 python3.9[201345]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843119.6240983-510-272295701374279/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:20 compute-1 sudo[201343]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:20.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:20.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:21 compute-1 sudo[201495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-havzcdveobgknibdxafsoorcwvimcjst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843121.04314-558-11675390216058/AnsiballZ_lineinfile.py'
Jan 31 07:05:21 compute-1 sudo[201495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:21 compute-1 python3.9[201497]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:21 compute-1 sudo[201495]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:21 compute-1 ceph-mon[81728]: pgmap v637: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:22 compute-1 sudo[201647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cptvfjtaahuawngoutcqwvcgtnrelpqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843121.7300758-582-70022351582/AnsiballZ_systemd.py'
Jan 31 07:05:22 compute-1 sudo[201647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:22 compute-1 python3.9[201649]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:05:22 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 07:05:22 compute-1 systemd[1]: Stopped Load Kernel Modules.
Jan 31 07:05:22 compute-1 systemd[1]: Stopping Load Kernel Modules...
Jan 31 07:05:22 compute-1 systemd[1]: Starting Load Kernel Modules...
Jan 31 07:05:22 compute-1 systemd[1]: Finished Load Kernel Modules.
Jan 31 07:05:22 compute-1 sudo[201647]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:22 compute-1 podman[201651]: 2026-01-31 07:05:22.836779081 +0000 UTC m=+0.084073894 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:05:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:22 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:22 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:23 compute-1 sudo[201821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzladojjucjvwbpqeejwnlcyddaptzpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843122.982756-606-126447005524229/AnsiballZ_command.py'
Jan 31 07:05:23 compute-1 sudo[201821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:23 compute-1 python3.9[201823]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:23 compute-1 sudo[201821]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:23 compute-1 ceph-mon[81728]: pgmap v638: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:24 compute-1 sudo[201974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjsmbbfdafzpuvwzsooitudlldvzhcsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843123.8182418-636-228839051563333/AnsiballZ_stat.py'
Jan 31 07:05:24 compute-1 sudo[201974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:24 compute-1 python3.9[201976]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:05:24 compute-1 sudo[201974]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:24 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:24 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:24 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:24.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:24.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:24 compute-1 sudo[202126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebitdrfhrjhpdjuqqixhkgalkedrvxfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843124.7259867-663-119635992119105/AnsiballZ_stat.py'
Jan 31 07:05:24 compute-1 sudo[202126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:24 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:25 compute-1 python3.9[202128]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:05:25 compute-1 sudo[202126]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:25 compute-1 sudo[202249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwbxfuxmsyjbewsgnyzlvggmebahafcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843124.7259867-663-119635992119105/AnsiballZ_copy.py'
Jan 31 07:05:25 compute-1 sudo[202249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:25 compute-1 python3.9[202251]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843124.7259867-663-119635992119105/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:25 compute-1 sudo[202249]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:25 compute-1 ceph-mon[81728]: pgmap v639: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:26 compute-1 sudo[202401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygfcziddawkrhqllehnonomnkrndxhhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843125.9033437-708-9803440248947/AnsiballZ_command.py'
Jan 31 07:05:26 compute-1 sudo[202401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:26 compute-1 python3.9[202403]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:26 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:26.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:26 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:26.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:27 compute-1 sshd-session[202405]: Invalid user ubuntu from 2.57.122.238 port 55252
Jan 31 07:05:27 compute-1 sudo[202401]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:27 compute-1 sshd-session[202405]: Connection closed by invalid user ubuntu 2.57.122.238 port 55252 [preauth]
Jan 31 07:05:27 compute-1 sudo[202556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loazanowcvgzozoparxconkwevmujntz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843127.485122-732-538580102152/AnsiballZ_lineinfile.py'
Jan 31 07:05:27 compute-1 sudo[202556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:27 compute-1 python3.9[202558]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:27 compute-1 sudo[202556]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:28 compute-1 ceph-mon[81728]: pgmap v640: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:28 compute-1 sudo[202708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sawjawhwcnavzauwevyiizojhnsolfwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843128.1151567-756-132754204807996/AnsiballZ_replace.py'
Jan 31 07:05:28 compute-1 sudo[202708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:28 compute-1 python3.9[202710]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:28 compute-1 sudo[202708]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:28 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:28 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:28.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 31 07:05:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:28.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 31 07:05:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:29 compute-1 sudo[202860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxznwgefwuaepbzsyzmvuatvcxjztnlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843129.02037-780-198967040020331/AnsiballZ_replace.py'
Jan 31 07:05:29 compute-1 sudo[202860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:29 compute-1 python3.9[202862]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:29 compute-1 sudo[202860]: pam_unix(sudo:session): session closed for user root
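The four steps above (a shell grep test, a lineinfile, and two replace calls) are a common idiom for guaranteeing an empty blacklist {} section in /etc/multipath.conf: test whether a section exists, append "blacklist {" if not, immediately close it with "}", and strip any catch-all devnode ".*" entry that would otherwise hide every path from multipathd. A sketch reconstructed from the logged parameters (task names and the when-condition wiring are assumptions):

  - name: Check for an existing blacklist section   # hypothetical name
    become: true
    ansible.builtin.shell: grep -q '^blacklist\s*{' /etc/multipath.conf
    register: blacklist_check
    failed_when: false
    changed_when: false

  - name: Open a blacklist section when none exists   # hypothetical name
    become: true
    ansible.builtin.lineinfile:
      path: /etc/multipath.conf
      line: "blacklist {"
    when: blacklist_check.rc != 0

  - name: Close the section on the next line   # hypothetical name
    become: true
    ansible.builtin.replace:
      path: /etc/multipath.conf
      regexp: '^(blacklist {)'
      replace: '\1\n}'
    when: blacklist_check.rc != 0

  - name: Drop a catch-all devnode ".*" blacklist entry   # hypothetical name
    become: true
    ansible.builtin.replace:
      path: /etc/multipath.conf
      regexp: '^blacklist\s*{\n[\s]+devnode "\.\*"'
      replace: 'blacklist {'

The replace module operates on the whole file with multiline matching, which is why the second regexp can span the section header and the devnode line.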
Jan 31 07:05:29 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:29 compute-1 sudo[203012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lahpzezfovanspnshxeaddytiryyetkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843129.7025251-807-212708808598760/AnsiballZ_lineinfile.py'
Jan 31 07:05:29 compute-1 sudo[203012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:30 compute-1 ceph-mon[81728]: pgmap v641: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:30 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:30 compute-1 python3.9[203014]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:30 compute-1 sudo[203012]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:30 compute-1 ceph-mgr[82088]: client.0 ms_handle_reset on v2:192.168.122.100:6800/4113492602
Jan 31 07:05:30 compute-1 sudo[203164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdmmwizrsqjtzesuqzszbfjxoevpzyap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843130.2583654-807-78221246114750/AnsiballZ_lineinfile.py'
Jan 31 07:05:30 compute-1 sudo[203164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:30 compute-1 python3.9[203166]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:30 compute-1 sudo[203164]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:30.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:30.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:31 compute-1 sudo[203316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zniccfugujcaenuqcdlzyfloayrgvwli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843130.8179536-807-54985021678872/AnsiballZ_lineinfile.py'
Jan 31 07:05:31 compute-1 sudo[203316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:31 compute-1 python3.9[203318]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:31 compute-1 sudo[203316]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:31 compute-1 sudo[203479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjpkdoolqroynmrybwktkoufjvhfpin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843131.3886573-807-267913864906914/AnsiballZ_lineinfile.py'
Jan 31 07:05:31 compute-1 sudo[203479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:31 compute-1 podman[203442]: 2026-01-31 07:05:31.660593287 +0000 UTC m=+0.080858687 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:05:31 compute-1 python3.9[203485]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:31 compute-1 sudo[203479]: pam_unix(sudo:session): session closed for user root
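The four lineinfile runs above share one pattern: insertafter='^defaults' with firstmatch anchors a new line directly under the defaults section header, while the regexp rewrites an existing setting in place instead of duplicating it. The logged runs were four separate tasks; a loop-based sketch under that assumption (task name and loop layout are hypothetical):

  - name: Pin multipath defaults   # hypothetical name
    become: true
    ansible.builtin.lineinfile:
      path: /etc/multipath.conf
      insertafter: '^defaults'
      firstmatch: true
      regexp: '^\s+{{ item.key }}'
      line: "        {{ item.key }} {{ item.value }}"
    loop:
      - { key: find_multipaths, value: "yes" }
      - { key: recheck_wwid, value: "yes" }
      - { key: skip_kpartx, value: "yes" }
      - { key: user_friendly_names, value: "no" }

Setting user_friendly_names to no keeps WWID-based device names, so multipath device paths stay identical across hosts.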
Jan 31 07:05:32 compute-1 ceph-mon[81728]: pgmap v642: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:32 compute-1 sudo[203646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhwbwpofinslfreejwktwhhfxjokjypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843132.146764-894-79510709291124/AnsiballZ_stat.py'
Jan 31 07:05:32 compute-1 sudo[203646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:32 compute-1 python3.9[203648]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:05:32 compute-1 sudo[203646]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:32.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:32.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:33 compute-1 sudo[203800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbrdefiiruhsfdvicicvwutmnxelwwkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843133.086859-918-260600803379325/AnsiballZ_command.py'
Jan 31 07:05:33 compute-1 sudo[203800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:33 compute-1 python3.9[203802]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:33 compute-1 sudo[203800]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:34 compute-1 ceph-mon[81728]: pgmap v643: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:34 compute-1 sudo[203953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrrmpfwtdarrfncwrsgmvbtauhkaubjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843134.3201044-945-184312364352374/AnsiballZ_systemd_service.py'
Jan 31 07:05:34 compute-1 sudo[203953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:34 compute-1 python3.9[203955]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:34 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:34 compute-1 systemd[1]: Listening on multipathd control socket.
Jan 31 07:05:34 compute-1 sudo[203953]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:34.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:34.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:35 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:35 compute-1 sudo[204109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sphsthaqsbfinhphyomnzydgneiilvyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843135.1533394-969-215540455777481/AnsiballZ_systemd_service.py'
Jan 31 07:05:35 compute-1 sudo[204109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:35 compute-1 python3.9[204111]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:35 compute-1 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 07:05:35 compute-1 udevadm[204116]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 07:05:35 compute-1 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 07:05:35 compute-1 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 07:05:35 compute-1 multipathd[204119]: --------start up--------
Jan 31 07:05:35 compute-1 multipathd[204119]: read /etc/multipath.conf
Jan 31 07:05:35 compute-1 multipathd[204119]: path checkers start up
Jan 31 07:05:35 compute-1 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 07:05:35 compute-1 sudo[204109]: pam_unix(sudo:session): session closed for user root
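The two systemd_service calls above enable and start multipathd.socket first and then the multipathd unit itself; with the socket already listening, the daemon can be socket-activated and later restarted without losing its control socket. The systemd-udev-settle deprecation warning comes from multipathd.service's own unit dependencies, not from these tasks. Sketch (task names hypothetical):

  - name: Enable and start the multipathd control socket   # hypothetical name
    become: true
    ansible.builtin.systemd_service:
      name: multipathd.socket
      enabled: true
      state: started

  - name: Enable and start multipathd   # hypothetical name
    become: true
    ansible.builtin.systemd_service:
      name: multipathd
      enabled: true
      state: started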
Jan 31 07:05:36 compute-1 ceph-mon[81728]: pgmap v644: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:36 compute-1 sudo[204276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzifhjvznrpdnzlfwpvaogrmvhbbwmog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843136.4746108-1005-100658424925754/AnsiballZ_file.py'
Jan 31 07:05:36 compute-1 sudo[204276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:36.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:36.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:37 compute-1 python3.9[204278]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 07:05:37 compute-1 sudo[204276]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:37 compute-1 sudo[204428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwwkbnpyzubelkyikyavebnlyprymado ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843137.275604-1029-237098182543674/AnsiballZ_modprobe.py'
Jan 31 07:05:37 compute-1 sudo[204428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:37 compute-1 python3.9[204430]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 07:05:37 compute-1 kernel: Key type psk registered
Jan 31 07:05:37 compute-1 sudo[204428]: pam_unix(sudo:session): session closed for user root
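community.general.modprobe with state=present loads nvme-fabrics immediately; the kernel's "Key type psk registered" line appears to be a side effect of the module load pulling in its TLS pre-shared-key keyring support. With persistent=disabled (the logged default) the load does not survive a reboot, which is why the next tasks write persistence files. Sketch (task name hypothetical):

  - name: Load the nvme-fabrics kernel module   # hypothetical name
    become: true
    community.general.modprobe:
      name: nvme-fabrics
      state: present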
Jan 31 07:05:38 compute-1 sudo[204589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhspqhfwypiszbwsczgyhkxsqyrhtajf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843137.9647627-1053-186085312396051/AnsiballZ_stat.py'
Jan 31 07:05:38 compute-1 sudo[204589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:38 compute-1 ceph-mon[81728]: pgmap v645: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:38 compute-1 python3.9[204591]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:05:38 compute-1 sudo[204589]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-1 sudo[204712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzvgctvrtzzsiwuewimxkctkjfiiexxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843137.9647627-1053-186085312396051/AnsiballZ_copy.py'
Jan 31 07:05:38 compute-1 sudo[204712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:38 compute-1 python3.9[204714]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843137.9647627-1053-186085312396051/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:38 compute-1 sudo[204712]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:38.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:38 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:38 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:38.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:39 compute-1 sudo[204864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjhemiwgfpnnbaqtevdgkapfyaexdyhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843139.1408951-1101-261314198641619/AnsiballZ_lineinfile.py'
Jan 31 07:05:39 compute-1 sudo[204864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:39 compute-1 python3.9[204866]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:39 compute-1 sudo[204864]: pam_unix(sudo:session): session closed for user root
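Persistence is split across two files by the tasks above: a rendered module-load.conf.j2 dropped into /etc/modules-load.d/ for systemd-modules-load, plus a line appended to /etc/modules for compatibility. A sketch assuming the template renders to just the module name (the content, like the task names, is an assumption):

  - name: Persist nvme-fabrics for systemd-modules-load   # hypothetical name
    become: true
    ansible.builtin.copy:
      content: "nvme-fabrics\n"   # assumed rendering of module-load.conf.j2
      dest: /etc/modules-load.d/nvme-fabrics.conf
      mode: "0644"

  - name: Persist nvme-fabrics in /etc/modules   # hypothetical name
    become: true
    ansible.builtin.lineinfile:
      path: /etc/modules
      line: nvme-fabrics
      create: true
      mode: "0644"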
Jan 31 07:05:39 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:40 compute-1 sudo[205016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crrnbbyohebwdmiahkqwuiqqhxpmxnrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843139.8121443-1125-254058012377966/AnsiballZ_systemd.py'
Jan 31 07:05:40 compute-1 sudo[205016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:40 compute-1 ceph-mon[81728]: pgmap v646: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:40 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:40 compute-1 python3.9[205018]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:05:40 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 07:05:40 compute-1 systemd[1]: Stopped Load Kernel Modules.
Jan 31 07:05:40 compute-1 systemd[1]: Stopping Load Kernel Modules...
Jan 31 07:05:40 compute-1 systemd[1]: Starting Load Kernel Modules...
Jan 31 07:05:40 compute-1 systemd[1]: Finished Load Kernel Modules.
Jan 31 07:05:40 compute-1 sudo[205016]: pam_unix(sudo:session): session closed for user root
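Restarting systemd-modules-load.service forces systemd to re-read /etc/modules-load.d/ so the new nvme-fabrics.conf takes effect without a reboot; the Stopped/Starting/Finished lines above are that restart. Sketch (task name hypothetical):

  - name: Re-run kernel module loading   # hypothetical name
    become: true
    ansible.builtin.systemd:
      name: systemd-modules-load.service
      state: restarted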
Jan 31 07:05:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:40.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:40 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:40 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:40.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:41 compute-1 sudo[205172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbxikkuzvuiahhrjqbuqtsyperhqgtdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843140.7373714-1149-247322900131345/AnsiballZ_dnf.py'
Jan 31 07:05:41 compute-1 sudo[205172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:41 compute-1 python3.9[205174]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
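The dnf task installs nvme-cli, the userspace tooling for discovering and connecting NVMe-over-Fabrics targets; the man-db-cache-update and systemd Reloading entries further down are routine post-transaction scriptlets from the same install. Sketch (task name hypothetical):

  - name: Install NVMe userspace tooling   # hypothetical name
    become: true
    ansible.builtin.dnf:
      name: nvme-cli
      state: present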
Jan 31 07:05:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:42 compute-1 ceph-mon[81728]: pgmap v647: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:42 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:42 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:43 compute-1 sudo[205179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:43 compute-1 sudo[205179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-1 sudo[205179]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:43 compute-1 sudo[205204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:43 compute-1 sudo[205204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-1 sudo[205204]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-1 sudo[205229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:43 compute-1 sudo[205229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-1 sudo[205229]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-1 sudo[205254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:05:43 compute-1 sudo[205254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-1 systemd[1]: Reloading.
Jan 31 07:05:43 compute-1 systemd-rc-local-generator[205318]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:05:43 compute-1 systemd-sysv-generator[205321]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:05:43 compute-1 sudo[205254]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-1 systemd[1]: Reloading.
Jan 31 07:05:43 compute-1 systemd-sysv-generator[205377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:05:43 compute-1 systemd-rc-local-generator[205373]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:05:44 compute-1 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 07:05:44 compute-1 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 07:05:44 compute-1 lvm[205419]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:05:44 compute-1 lvm[205419]: VG ceph_vg0 finished
Jan 31 07:05:44 compute-1 ceph-mon[81728]: pgmap v648: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:44 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:05:44 compute-1 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:05:44 compute-1 systemd[1]: Reloading.
Jan 31 07:05:44 compute-1 systemd-rc-local-generator[205466]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:05:44 compute-1 systemd-sysv-generator[205471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:05:44 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:05:44 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49ea9f46f0 =====
Jan 31 07:05:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:44.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:44 compute-1 radosgw[83730]: ====== req done req=0x7f49ea9f46f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:44 compute-1 radosgw[83730]: beast: 0x7f49ea9f46f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:44.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:45 compute-1 sudo[205172]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:05:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:05:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:05:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:05:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:45 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:45 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:05:45 compute-1 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:05:45 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.171s CPU time.
Jan 31 07:05:45 compute-1 systemd[1]: run-rb4c5779ab4fb46859cb962a94a373a73.service: Deactivated successfully.
Jan 31 07:05:45 compute-1 sudo[206772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxfopkpjhkyevqyamevkipjefuxccemy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843145.5756402-1173-265612154627951/AnsiballZ_systemd_service.py'
Jan 31 07:05:45 compute-1 sudo[206772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:46 compute-1 python3.9[206774]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:05:46 compute-1 systemd[1]: Stopping Open-iSCSI...
Jan 31 07:05:46 compute-1 iscsid[200146]: iscsid shutting down.
Jan 31 07:05:46 compute-1 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 07:05:46 compute-1 systemd[1]: Stopped Open-iSCSI.
Jan 31 07:05:46 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 07:05:46 compute-1 systemd[1]: Starting Open-iSCSI...
Jan 31 07:05:46 compute-1 systemd[1]: Started Open-iSCSI.
Jan 31 07:05:46 compute-1 sudo[206772]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:46 compute-1 ceph-mon[81728]: pgmap v649: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:46 compute-1 sudo[206928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqmyypuukdndollasmdayejvzemhfsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843146.436759-1197-163709370543138/AnsiballZ_systemd_service.py'
Jan 31 07:05:46 compute-1 sudo[206928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:46 compute-1 python3.9[206930]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:05:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:46.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:46 compute-1 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 07:05:46 compute-1 multipathd[204119]: exit (signal)
Jan 31 07:05:46 compute-1 multipathd[204119]: --------shut down-------
Jan 31 07:05:46 compute-1 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 07:05:46 compute-1 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 07:05:46 compute-1 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 07:05:47 compute-1 multipathd[206936]: --------start up--------
Jan 31 07:05:47 compute-1 multipathd[206936]: read /etc/multipath.conf
Jan 31 07:05:47 compute-1 multipathd[206936]: path checkers start up
Jan 31 07:05:47 compute-1 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 07:05:47 compute-1 sudo[206928]: pam_unix(sudo:session): session closed for user root
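With the configuration in place, both initiator daemons are bounced so they re-read it: iscsid first (its shutdown/startup pair is logged above), then multipathd, whose startup banner again shows it reading /etc/multipath.conf. Sketch (task names hypothetical):

  - name: Restart iscsid   # hypothetical name
    become: true
    ansible.builtin.systemd_service:
      name: iscsid
      state: restarted

  - name: Restart multipathd   # hypothetical name
    become: true
    ansible.builtin.systemd_service:
      name: multipathd
      state: restarted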
Jan 31 07:05:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:47 compute-1 python3.9[207093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:05:48 compute-1 ceph-mon[81728]: pgmap v650: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:48 compute-1 sudo[207247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndpxakydwwhrgplhiycdzdhndcpyfdos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843148.4376986-1249-83327891973870/AnsiballZ_file.py'
Jan 31 07:05:48 compute-1 sudo[207247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:48 compute-1 python3.9[207249]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:05:48 compute-1 sudo[207247]: pam_unix(sudo:session): session closed for user root
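state=touch with an explicit mode guarantees /etc/ssh/ssh_known_hosts exists with predictable permissions before later tasks populate it. Sketch (task name hypothetical):

  - name: Ensure the global known_hosts file exists   # hypothetical name
    become: true
    ansible.builtin.file:
      path: /etc/ssh/ssh_known_hosts
      state: touch
      mode: "0644"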
Jan 31 07:05:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:48.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:48.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.627987) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149628191, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 995, "num_deletes": 251, "total_data_size": 1571403, "memory_usage": 1602072, "flush_reason": "Manual Compaction"}
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149633191, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 674997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18133, "largest_seqno": 19123, "table_properties": {"data_size": 671406, "index_size": 1179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10657, "raw_average_key_size": 20, "raw_value_size": 663193, "raw_average_value_size": 1290, "num_data_blocks": 52, "num_entries": 514, "num_filter_entries": 514, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843091, "oldest_key_time": 1769843091, "file_creation_time": 1769843149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5251 microseconds, and 2469 cpu microseconds.
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.633243) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 674997 bytes OK
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.633260) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.634666) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.634686) EVENT_LOG_v1 {"time_micros": 1769843149634680, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.634707) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1566371, prev total WAL file size 1566371, number of live WAL files 2.
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.635663) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(659KB)], [33(9635KB)]
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149635724, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 10542166, "oldest_snapshot_seqno": -1}
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5215 keys, 7015046 bytes, temperature: kUnknown
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149713434, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7015046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6983251, "index_size": 17669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 134413, "raw_average_key_size": 25, "raw_value_size": 6891271, "raw_average_value_size": 1321, "num_data_blocks": 706, "num_entries": 5215, "num_filter_entries": 5215, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.713674) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7015046 bytes
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.715830) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.5 rd, 90.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.4 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(26.0) write-amplify(10.4) OK, records in: 5705, records dropped: 490 output_compression: NoCompression
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.715853) EVENT_LOG_v1 {"time_micros": 1769843149715843, "job": 18, "event": "compaction_finished", "compaction_time_micros": 77780, "compaction_time_cpu_micros": 16216, "output_level": 6, "num_output_files": 1, "total_output_size": 7015046, "num_input_records": 5705, "num_output_records": 5215, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149716020, "job": 18, "event": "table_file_deletion", "file_number": 35}
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843149716771, "job": 18, "event": "table_file_deletion", "file_number": 33}
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.635561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.716903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.716909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.716917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.716918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:05:49 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:05:49.716920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:05:49 compute-1 sudo[207399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukgudsoyvscyvsxjgwhwxrnkodnxaqcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843149.5461273-1282-93030615078740/AnsiballZ_systemd_service.py'
Jan 31 07:05:49 compute-1 sudo[207399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:50 compute-1 sudo[207402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:50 compute-1 sudo[207402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:50 compute-1 sudo[207402]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:50 compute-1 sudo[207427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:05:50 compute-1 sudo[207427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:50 compute-1 python3.9[207401]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:05:50 compute-1 sudo[207427]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:50 compute-1 systemd[1]: Reloading.
Jan 31 07:05:50 compute-1 systemd-sysv-generator[207482]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:05:50 compute-1 systemd-rc-local-generator[207475]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:05:50 compute-1 sudo[207399]: pam_unix(sudo:session): session closed for user root
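A systemd_service call with only daemon_reload=True is the module-level equivalent of systemctl daemon-reload; the Reloading line and the two generator warnings (non-executable rc.local, SysV network script) are re-emitted on every reload and are not caused by this play. Sketch (task name hypothetical):

  - name: Reload systemd unit definitions   # hypothetical name
    become: true
    ansible.builtin.systemd_service:
      daemon_reload: true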
Jan 31 07:05:50 compute-1 ceph-mon[81728]: pgmap v651: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:50 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:05:50 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:05:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:50.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:50.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:50 compute-1 python3.9[207636]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:05:51 compute-1 network[207653]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:05:51 compute-1 network[207654]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:05:51 compute-1 network[207655]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:05:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:52 compute-1 ceph-mon[81728]: pgmap v652: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:52 compute-1 podman[207762]: 2026-01-31 07:05:52.915708805 +0000 UTC m=+0.050110696 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
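The podman health_status=healthy event above comes from the healthcheck recorded in the container's config_data ('test': '/openstack/healthcheck'). A hypothetical standalone equivalent of just the healthcheck wiring (the interval is an assumption; the real container is created by edpm_ansible with many more options):

    podman run -d --name ovn_metadata_agent \
        --health-cmd /openstack/healthcheck \
        --health-interval 30s \
        quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified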
Jan 31 07:05:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:52.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:52.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:54 compute-1 ceph-mon[81728]: pgmap v653: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:54 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:54.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:54.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:55 compute-1 sudo[207945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulbbaxhmndtjvqkiuffrghoxnjyzkneq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843154.8683715-1340-234140470334244/AnsiballZ_systemd_service.py'
Jan 31 07:05:55 compute-1 sudo[207945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:55 compute-1 python3.9[207947]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:55 compute-1 sudo[207945]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:55 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 894 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:05:55 compute-1 sudo[208098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gczwvqufjsuuvrgfhzvcofxrhyjiicbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843155.5374503-1340-30875627288563/AnsiballZ_systemd_service.py'
Jan 31 07:05:55 compute-1 sudo[208098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:56 compute-1 python3.9[208100]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:56 compute-1 sudo[208098]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:56 compute-1 sudo[208251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfqnzhwhtyrjxhsahflzgczkqoiibrwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843156.193255-1340-50499374972119/AnsiballZ_systemd_service.py'
Jan 31 07:05:56 compute-1 sudo[208251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:56 compute-1 python3.9[208253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:56 compute-1 sudo[208251]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:56.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:05:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:56.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:05:57 compute-1 ceph-mon[81728]: pgmap v654: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:57 compute-1 sudo[208404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iefgaqntdbssamvdjuyekvjdxtlzmpfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843156.8860927-1340-280716278110490/AnsiballZ_systemd_service.py'
Jan 31 07:05:57 compute-1 sudo[208404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:57 compute-1 python3.9[208406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:57 compute-1 sudo[208404]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:57 compute-1 sudo[208557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggmbbtuiklyiznrtsjhlfvxjttjxheie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843157.5809307-1340-119300966156818/AnsiballZ_systemd_service.py'
Jan 31 07:05:57 compute-1 sudo[208557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:58 compute-1 ceph-mon[81728]: pgmap v655: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:05:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:05:58 compute-1 python3.9[208559]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:58 compute-1 sudo[208557]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:58 compute-1 sudo[208710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjydooxrcexmnxenacfejwmkkshdhpxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843158.3447685-1340-251967623271489/AnsiballZ_systemd_service.py'
Jan 31 07:05:58 compute-1 sudo[208710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:58 compute-1 python3.9[208712]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:58 compute-1 sudo[208710]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:05:58.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:05:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:05:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:05:58.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:05:59 compute-1 sudo[208863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbqxhvziqbezaoyugzijggqknsmhfkdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843158.9966877-1340-153989708336026/AnsiballZ_systemd_service.py'
Jan 31 07:05:59 compute-1 sudo[208863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:59 compute-1 python3.9[208865]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:05:59 compute-1 sudo[208863]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:59 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:59 compute-1 sudo[209016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjvjlkahghrhcbarixwnmjpkupqkqjfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843159.6252737-1340-226901819283722/AnsiballZ_systemd_service.py'
Jan 31 07:05:59 compute-1 sudo[209016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:00 compute-1 python3.9[209018]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:06:00 compute-1 sudo[209016]: pam_unix(sudo:session): session closed for user root
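Each of the eight systemd_service tasks in this stretch applies the same enabled=False, state=stopped pair to one tripleo_nova_* unit. A condensed shell sketch of the whole batch (the loop is editorial; Ansible issues one call per unit):

    for unit in tripleo_nova_compute tripleo_nova_migration_target \
                tripleo_nova_api_cron tripleo_nova_api tripleo_nova_conductor \
                tripleo_nova_metadata tripleo_nova_scheduler tripleo_nova_vnc_proxy; do
        systemctl disable --now "$unit.service"    # stop the unit and drop its enablement
    done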
Jan 31 07:06:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:00 compute-1 ceph-mon[81728]: pgmap v656: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:00 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 898 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:00.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:06:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:00.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:06:01 compute-1 sudo[209169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdnbxmgxdpqabhancealhvuslxbzduie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843161.3229141-1517-158152254742812/AnsiballZ_file.py'
Jan 31 07:06:01 compute-1 sudo[209169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:01 compute-1 python3.9[209171]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:01 compute-1 sudo[209169]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:02 compute-1 sudo[209334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvuitujqgomwvqqputvbrzrtwepbzpht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843161.8129923-1517-142729754941447/AnsiballZ_file.py'
Jan 31 07:06:02 compute-1 sudo[209334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:02 compute-1 podman[209295]: 2026-01-31 07:06:02.065147196 +0000 UTC m=+0.067661530 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 07:06:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:02 compute-1 ceph-mon[81728]: pgmap v657: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:02 compute-1 python3.9[209343]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:02 compute-1 sudo[209334]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:02 compute-1 sudo[209499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfcsmsbusmmrwuugchltwambgdvteicq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843162.4220734-1517-54828456228696/AnsiballZ_file.py'
Jan 31 07:06:02 compute-1 sudo[209499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:02 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 07:06:02 compute-1 python3.9[209501]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:02 compute-1 sudo[209499]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:02.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:02.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:03 compute-1 sudo[209652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmqbtsqsrxrkluhbiqiymmmfxuxfvxzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843162.9812608-1517-21210862788180/AnsiballZ_file.py'
Jan 31 07:06:03 compute-1 sudo[209652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:03 compute-1 python3.9[209654]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:03 compute-1 sudo[209652]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:03 compute-1 sudo[209804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svkvdwkiwqjfxeoewoszcpuagshlqbis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843163.519386-1517-121218557527087/AnsiballZ_file.py'
Jan 31 07:06:03 compute-1 sudo[209804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:03 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 07:06:03 compute-1 python3.9[209806]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:03 compute-1 sudo[209804]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:04 compute-1 sudo[209957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwexiqilzcyjkclddsmnlyhovdjzaors ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843164.0407581-1517-201844988128584/AnsiballZ_file.py'
Jan 31 07:06:04 compute-1 sudo[209957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:04 compute-1 ceph-mon[81728]: pgmap v658: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:04 compute-1 python3.9[209959]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:04 compute-1 sudo[209957]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:04 compute-1 sudo[210109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-timkzybpmxbnltvdrqzdxsrodmhekboy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843164.5746133-1517-113715670715171/AnsiballZ_file.py'
Jan 31 07:06:04 compute-1 sudo[210109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:04.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:04.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:04 compute-1 python3.9[210111]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:04 compute-1 sudo[210109]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:05 compute-1 sudo[210261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aahbqxohtoyipxvoshydqrhlrthwgtxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843165.105438-1517-106314641743040/AnsiballZ_file.py'
Jan 31 07:06:05 compute-1 sudo[210261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:05 compute-1 python3.9[210263]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:05 compute-1 sudo[210261]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:05 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:06 compute-1 sudo[210413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frifcwwnbgodiajefhhxngwjtsuzlagc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843165.987904-1687-20112802877348/AnsiballZ_file.py'
Jan 31 07:06:06 compute-1 sudo[210413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:06 compute-1 python3.9[210415]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:06 compute-1 sudo[210413]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:06 compute-1 ceph-mon[81728]: pgmap v659: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:06 compute-1 sudo[210565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dawhrubhibrloomnyefbgtkwlfktjwth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843166.5771422-1687-165568886232785/AnsiballZ_file.py'
Jan 31 07:06:06 compute-1 sudo[210565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:06.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:06.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:06 compute-1 python3.9[210567]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:07 compute-1 sudo[210565]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:07 compute-1 sudo[210717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upxzdutdeiwkpemszlalgkbwfleliqts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843167.1188247-1687-93918555308081/AnsiballZ_file.py'
Jan 31 07:06:07 compute-1 sudo[210717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:07 compute-1 python3.9[210719]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:07 compute-1 sudo[210717]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:07 compute-1 sudo[210869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjdeupcodbevqzuaxedzyjjswpkonwzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843167.644915-1687-22215855637660/AnsiballZ_file.py'
Jan 31 07:06:07 compute-1 sudo[210869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:07 compute-1 ceph-mon[81728]: pgmap v660: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:08 compute-1 python3.9[210871]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:08 compute-1 sudo[210869]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:08 compute-1 sudo[211021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyphaevvkomigbxvgpjfraepmzkmmyef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843168.1974583-1687-130474478059115/AnsiballZ_file.py'
Jan 31 07:06:08 compute-1 sudo[211021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:08 compute-1 python3.9[211023]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:08 compute-1 sudo[211021]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:08 compute-1 sudo[211173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhwybxtbvfbfbtrdomorhgfxryqaapsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843168.738326-1687-232143647050483/AnsiballZ_file.py'
Jan 31 07:06:08 compute-1 sudo[211173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:08.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:08.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:09 compute-1 python3.9[211175]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:09 compute-1 sudo[211173]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:09 compute-1 sudo[211325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suorioxmfcsuobhywyxfqprdpxjbmdro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843169.268899-1687-131990218267537/AnsiballZ_file.py'
Jan 31 07:06:09 compute-1 sudo[211325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:09 compute-1 python3.9[211327]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:09 compute-1 sudo[211325]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:09 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:10 compute-1 sudo[211477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yriswrvoxwifooxfxzwuulsgtmclxyks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843169.7987697-1687-102729512549345/AnsiballZ_file.py'
Jan 31 07:06:10 compute-1 sudo[211477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:10 compute-1 python3.9[211479]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:10 compute-1 sudo[211477]: pam_unix(sudo:session): session closed for user root
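The file tasks from 07:06:01 onward delete the same eight unit names, first from /usr/lib/systemd/system and then from /etc/systemd/system, ahead of the daemon_reload that follows. A condensed sketch (the glob is editorial; each task removes one explicit path):

    rm -f /usr/lib/systemd/system/tripleo_nova_*.service \
          /etc/systemd/system/tripleo_nova_*.service
    systemctl daemon-reload    # forget the now-missing units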
Jan 31 07:06:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:10 compute-1 ceph-mon[81728]: pgmap v661: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:10 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 909 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:10.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:10.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:11 compute-1 sudo[211629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjkqwzlugdmxelwgrcusbtfnfmcwdfpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843171.1454456-1861-211076766056709/AnsiballZ_command.py'
Jan 31 07:06:11 compute-1 sudo[211629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:11 compute-1 python3.9[211631]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:11 compute-1 sudo[211629]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:12 compute-1 python3.9[211783]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
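The find task checks /var/lib/certmonger/requests for leftover certificate-tracking entries (hidden files included, no recursion). A shell equivalent under those same parameters:

    find /var/lib/certmonger/requests -mindepth 1 -maxdepth 1    # any file type, dotfiles included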
Jan 31 07:06:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:12 compute-1 ceph-mon[81728]: pgmap v662: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:12 compute-1 sudo[211933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzkaqkapuugbzaewqdgykxcxfrmlgvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843172.737702-1915-126830248462406/AnsiballZ_systemd_service.py'
Jan 31 07:06:12 compute-1 sudo[211933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 31 07:06:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:12.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 31 07:06:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:13 compute-1 python3.9[211935]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:06:13 compute-1 systemd[1]: Reloading.
Jan 31 07:06:13 compute-1 systemd-sysv-generator[211964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:06:13 compute-1 systemd-rc-local-generator[211959]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:06:13 compute-1 sudo[211933]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:13 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 07:06:13 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 07:06:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:14 compute-1 sudo[212122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcohxxtifhstltpdnuehlsvrrdmwoull ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843173.8364172-1939-240548104220537/AnsiballZ_command.py'
Jan 31 07:06:14 compute-1 sudo[212122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:14 compute-1 python3.9[212124]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:14 compute-1 sudo[212122]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:14 compute-1 ceph-mon[81728]: pgmap v663: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:14 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 914 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:14 compute-1 sudo[212275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gevqvbeanqfooiphpnozjdmfqmcjckke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843174.525737-1939-42013725711147/AnsiballZ_command.py'
Jan 31 07:06:14 compute-1 sudo[212275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:14 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:14 compute-1 python3.9[212277]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:14 compute-1 sudo[212275]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:14.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:14.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:15 compute-1 sudo[212428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxcrkvhqaejpymlehnphalmqbbafkgut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843175.0483341-1939-270074172044641/AnsiballZ_command.py'
Jan 31 07:06:15 compute-1 sudo[212428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:15 compute-1 python3.9[212430]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:15 compute-1 sudo[212428]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:15 compute-1 ceph-mon[81728]: pgmap v664: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:15 compute-1 sudo[212581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svwkbwmmmaforgfdviulwzkfnudskrms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843175.559233-1939-271131810017069/AnsiballZ_command.py'
Jan 31 07:06:15 compute-1 sudo[212581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:15 compute-1 python3.9[212583]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:16 compute-1 sudo[212581]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:16 compute-1 sudo[212734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqtoqipkjvikncugpgnvlgjipitbgowd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843176.1234658-1939-107772492512774/AnsiballZ_command.py'
Jan 31 07:06:16 compute-1 sudo[212734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:16 compute-1 python3.9[212736]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:16 compute-1 sudo[212734]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:16 compute-1 sudo[212887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znbrhahbaexyegftzxhjoyevczldiikc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843176.6797254-1939-140314799005961/AnsiballZ_command.py'
Jan 31 07:06:16 compute-1 sudo[212887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:16.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:16.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:17 compute-1 python3.9[212889]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:17 compute-1 sudo[212887]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:17 compute-1 sudo[213040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npytudzqstlhggheaxzpgapegiqtrgow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843177.1977081-1939-4336190441997/AnsiballZ_command.py'
Jan 31 07:06:17 compute-1 sudo[213040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:17 compute-1 python3.9[213042]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:17 compute-1 sudo[213040]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:17 compute-1 ceph-mon[81728]: pgmap v665: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:17 compute-1 sudo[213193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oealwpszvlphrvttotttvumsnhimmrkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843177.7599788-1939-7049680721415/AnsiballZ_command.py'
Jan 31 07:06:17 compute-1 sudo[213193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:18 compute-1 python3.9[213195]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:18 compute-1 sudo[213193]: pam_unix(sudo:session): session closed for user root
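With the units stopped and their files removed, each reset-failed task clears any remembered failed state so the deleted tripleo_nova_* units disappear from systemctl --failed. A condensed sketch of the eight calls:

    for unit in tripleo_nova_compute tripleo_nova_migration_target \
                tripleo_nova_api_cron tripleo_nova_api tripleo_nova_conductor \
                tripleo_nova_metadata tripleo_nova_scheduler tripleo_nova_vnc_proxy; do
        /usr/bin/systemctl reset-failed "$unit.service"
    done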
Jan 31 07:06:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:18.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:18.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:19 compute-1 sudo[213346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehtnagtczmuhpdtgrtxdekngtazrhqqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843179.5143209-2146-156271124018879/AnsiballZ_file.py'
Jan 31 07:06:19 compute-1 sudo[213346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:19 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:19 compute-1 ceph-mon[81728]: pgmap v666: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:19 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 919 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:06:19.881 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:06:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:06:19.882 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:06:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:06:19.882 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:06:19 compute-1 python3.9[213348]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:19 compute-1 sudo[213346]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:20 compute-1 sudo[213498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqatauvumdjsjajanqeasuhlsjuusdum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843180.044426-2146-231273599606045/AnsiballZ_file.py'
Jan 31 07:06:20 compute-1 sudo[213498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:20 compute-1 python3.9[213500]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:20 compute-1 sudo[213498]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:20 compute-1 sudo[213650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrxckinzdkmetjcktaqggqggzcakdsiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843180.5733292-2146-39544246708165/AnsiballZ_file.py'
Jan 31 07:06:20 compute-1 sudo[213650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:20.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:20 compute-1 python3.9[213652]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:20.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:21 compute-1 sudo[213650]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:21 compute-1 sudo[213802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djwguoplcvcoinhhwamawzhpfjnoteaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843181.3837903-2212-223712942963234/AnsiballZ_file.py'
Jan 31 07:06:21 compute-1 sudo[213802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:21 compute-1 python3.9[213804]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:21 compute-1 sudo[213802]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:21 compute-1 ceph-mon[81728]: pgmap v667: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:22 compute-1 sudo[213954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrrjaouebzozhwcarrwfheodshfrxywc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843181.9628792-2212-214242370968369/AnsiballZ_file.py'
Jan 31 07:06:22 compute-1 sudo[213954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:22 compute-1 python3.9[213956]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:22 compute-1 sudo[213954]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:22 compute-1 sudo[214106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eisjvyfazktzokwqxdjaxbzbdcjzidub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843182.5464346-2212-247598168276159/AnsiballZ_file.py'
Jan 31 07:06:22 compute-1 sudo[214106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:22 compute-1 python3.9[214108]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:22 compute-1 sudo[214106]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:06:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:22.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:06:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:22.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:22 compute-1 podman[214109]: 2026-01-31 07:06:22.989882681 +0000 UTC m=+0.047925757 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 07:06:23 compute-1 sudo[214277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrlswepowldsabtlkowlakvjaibrgbtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843183.0641646-2212-79928037484707/AnsiballZ_file.py'
Jan 31 07:06:23 compute-1 sudo[214277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:23 compute-1 python3.9[214279]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:23 compute-1 sudo[214277]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:23 compute-1 sudo[214429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-palyofgnrjaqilucwrszqajdawlpfiso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843183.6697757-2212-18701884977165/AnsiballZ_file.py'
Jan 31 07:06:23 compute-1 sudo[214429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:23 compute-1 ceph-mon[81728]: pgmap v668: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:24 compute-1 python3.9[214431]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:24 compute-1 sudo[214429]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:24 compute-1 sudo[214581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtwifnteaetlkdwyaahmurimgeqaxtmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843184.1932037-2212-147177273393150/AnsiballZ_file.py'
Jan 31 07:06:24 compute-1 sudo[214581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:24 compute-1 python3.9[214583]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:24 compute-1 sudo[214581]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:24 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:24 compute-1 sudo[214733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvoaqytrwgitfjsbgjvvbohcavsyisbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843184.71513-2212-237937245186247/AnsiballZ_file.py'
Jan 31 07:06:24 compute-1 sudo[214733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:24 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 924 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:24.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:24.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:25 compute-1 python3.9[214735]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:25 compute-1 sudo[214733]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:25 compute-1 ceph-mon[81728]: pgmap v669: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:26.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:26.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:28 compute-1 ceph-mon[81728]: pgmap v670: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:28.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:28.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:29 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:30 compute-1 ceph-mon[81728]: pgmap v671: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:30 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 929 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:30 compute-1 sudo[214885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-encwdatvhvbdgakthibsuqhihuqmxfda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843190.4355803-2537-207988490382577/AnsiballZ_getent.py'
Jan 31 07:06:30 compute-1 sudo[214885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:30.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:30.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:30 compute-1 python3.9[214887]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 07:06:31 compute-1 sudo[214885]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:31 compute-1 sudo[215038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woxhczidofgmutgteqslztulmfsiyusv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843191.3642452-2562-91930675989771/AnsiballZ_group.py'
Jan 31 07:06:31 compute-1 sudo[215038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:31 compute-1 python3.9[215040]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:06:31 compute-1 groupadd[215041]: group added to /etc/group: name=nova, GID=42436
Jan 31 07:06:31 compute-1 groupadd[215041]: group added to /etc/gshadow: name=nova
Jan 31 07:06:31 compute-1 groupadd[215041]: new group: name=nova, GID=42436
Jan 31 07:06:31 compute-1 sudo[215038]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-1 ceph-mon[81728]: pgmap v672: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:32 compute-1 sudo[215207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnvvktgwafycnqapbedepzkvngllormv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843192.1196322-2585-17193572217115/AnsiballZ_user.py'
Jan 31 07:06:32 compute-1 sudo[215207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:32 compute-1 podman[215170]: 2026-01-31 07:06:32.567157557 +0000 UTC m=+0.064635008 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:06:32 compute-1 python3.9[215217]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:06:32 compute-1 useradd[215227]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 31 07:06:32 compute-1 useradd[215227]: add 'nova' to group 'libvirt'
Jan 31 07:06:32 compute-1 useradd[215227]: add 'nova' to shadow group 'libvirt'
Jan 31 07:06:32 compute-1 sudo[215207]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:32.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:32.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:34 compute-1 sshd-session[215258]: Accepted publickey for zuul from 192.168.122.30 port 37746 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 07:06:34 compute-1 systemd-logind[788]: New session 50 of user zuul.
Jan 31 07:06:34 compute-1 systemd[1]: Started Session 50 of User zuul.
Jan 31 07:06:34 compute-1 sshd-session[215258]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:06:34 compute-1 ceph-mon[81728]: pgmap v673: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:34 compute-1 sshd-session[215261]: Received disconnect from 192.168.122.30 port 37746:11: disconnected by user
Jan 31 07:06:34 compute-1 sshd-session[215261]: Disconnected from user zuul 192.168.122.30 port 37746
Jan 31 07:06:34 compute-1 sshd-session[215258]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:06:34 compute-1 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 07:06:34 compute-1 systemd-logind[788]: Session 50 logged out. Waiting for processes to exit.
Jan 31 07:06:34 compute-1 systemd-logind[788]: Removed session 50.
Jan 31 07:06:34 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:34.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:35.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:35 compute-1 python3.9[215411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:06:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:35 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:35 compute-1 python3.9[215532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843194.6963613-2660-166889689237787/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:36 compute-1 python3.9[215682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:06:36 compute-1 ceph-mon[81728]: pgmap v674: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:36 compute-1 python3.9[215758]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:36.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:37.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:37 compute-1 python3.9[215908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:06:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:37 compute-1 python3.9[216029]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843196.6449845-2660-176101180409593/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:38 compute-1 python3.9[216179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:06:38 compute-1 ceph-mon[81728]: pgmap v675: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:38 compute-1 python3.9[216300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843197.6799037-2660-101802616018643/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:38.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:39.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:39 compute-1 python3.9[216450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:06:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:39 compute-1 python3.9[216571]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843198.7033477-2660-56532320398519/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:39 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:40 compute-1 python3.9[216721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:06:40 compute-1 ceph-mon[81728]: pgmap v676: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:40 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 939 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:40 compute-1 python3.9[216842]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843199.7448688-2660-203215206544316/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:40.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:41.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:41 compute-1 sudo[216992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fexacylykldlsohtxdehrayftmikfrqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843201.2735782-2909-28230039595590/AnsiballZ_file.py'
Jan 31 07:06:41 compute-1 sudo[216992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:41 compute-1 python3.9[216994]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:41 compute-1 sudo[216992]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:42 compute-1 sudo[217144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmaolxtvhqhjxxgkvunatkwdkbevjrlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843201.974902-2933-108571086069907/AnsiballZ_copy.py'
Jan 31 07:06:42 compute-1 sudo[217144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:42 compute-1 python3.9[217146]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:06:42 compute-1 sudo[217144]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:42 compute-1 ceph-mon[81728]: pgmap v677: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:42 compute-1 sudo[217296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaxlmxjdsjkbbjfiekvfnxborirvjzgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843202.6630828-2958-130775619939011/AnsiballZ_stat.py'
Jan 31 07:06:42 compute-1 sudo[217296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:42.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:43.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:43 compute-1 python3.9[217298]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:06:43 compute-1 sudo[217296]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:43 compute-1 sudo[217448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdfnlffuzxqogajwfimdgdnvyvynenxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843203.296635-2981-67870423134119/AnsiballZ_stat.py'
Jan 31 07:06:43 compute-1 sudo[217448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:43 compute-1 python3.9[217450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:06:43 compute-1 sudo[217448]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:44 compute-1 sudo[217571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quvilntobzwiahkhggbyrnahapwqsrhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843203.296635-2981-67870423134119/AnsiballZ_copy.py'
Jan 31 07:06:44 compute-1 sudo[217571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:44 compute-1 python3.9[217573]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769843203.296635-2981-67870423134119/.source _original_basename=.zegwgl4o follow=False checksum=c8671954745c2123eb96533e1b8e5f899b56288e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 07:06:44 compute-1 sudo[217571]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:44 compute-1 ceph-mon[81728]: pgmap v678: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:44 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:45.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:45.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:45 compute-1 python3.9[217725]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:06:45 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:45 compute-1 python3.9[217877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:06:46 compute-1 python3.9[217998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843205.3301384-3059-131119817973004/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:46 compute-1 ceph-mon[81728]: pgmap v679: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:46 compute-1 python3.9[218148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:06:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:47.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:47.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:47 compute-1 python3.9[218269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843206.4196637-3104-134263794254474/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:06:47 compute-1 ceph-mon[81728]: pgmap v680: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:48 compute-1 sudo[218419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnhhggtzaolvnfrqpmbxrbndmeoxjhst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843207.8486536-3155-147140007189273/AnsiballZ_container_config_data.py'
Jan 31 07:06:48 compute-1 sudo[218419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:48 compute-1 python3.9[218421]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 07:06:48 compute-1 sudo[218419]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:49.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:49.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:49 compute-1 sudo[218571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhhizxpoxppoqrrhxfzgoqjitomnjksx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843208.889179-3188-108372506825499/AnsiballZ_container_config_hash.py'
Jan 31 07:06:49 compute-1 sudo[218571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:49 compute-1 python3.9[218573]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:06:49 compute-1 sudo[218571]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:49 compute-1 ceph-mon[81728]: pgmap v681: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:49 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:50 compute-1 sudo[218723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfwyurgjyyocwngnvrolksrcmbnflioq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843209.9231868-3218-40075837784735/AnsiballZ_edpm_container_manage.py'
Jan 31 07:06:50 compute-1 sudo[218723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:50 compute-1 sudo[218726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:50 compute-1 sudo[218726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:50 compute-1 sudo[218726]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:50 compute-1 sudo[218751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:50 compute-1 sudo[218751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:50 compute-1 sudo[218751]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:50 compute-1 sudo[218776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:50 compute-1 sudo[218776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:50 compute-1 sudo[218776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:50 compute-1 sudo[218801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:06:50 compute-1 sudo[218801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:50 compute-1 python3[218725]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:06:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:50 compute-1 sudo[218801]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:51.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:51.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:51 compute-1 ceph-mon[81728]: pgmap v682: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:06:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:06:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:06:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:06:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:53.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:53.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:54 compute-1 ceph-mon[81728]: pgmap v683: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:06:54 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:55.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:55.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:06:56 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 954 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:06:56 compute-1 podman[218907]: 2026-01-31 07:06:56.235485541 +0000 UTC m=+3.170499880 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:06:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:06:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:57.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:06:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:06:59.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:06:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:06:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:06:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:06:59.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:00 compute-1 ceph-mon[81728]: pgmap v684: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:01.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:01.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:07:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:03.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:07:03 compute-1 podman[218965]: 2026-01-31 07:07:03.160673754 +0000 UTC m=+0.086632613 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:07:03 compute-1 ceph-mon[81728]: pgmap v685: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:03 compute-1 ceph-mon[81728]: pgmap v686: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:03 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 959 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:05.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:06 compute-1 ceph-mon[81728]: pgmap v687: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:06 compute-1 ceph-mon[81728]: pgmap v688: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:07.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:07.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:07 compute-1 sudo[218992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:07 compute-1 sudo[218992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:07 compute-1 ceph-mon[81728]: pgmap v689: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:07 compute-1 ceph-mon[81728]: pgmap v690: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:07 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 969 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:07 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:07:07 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:07:07 compute-1 sudo[218992]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:07 compute-1 sudo[219017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:07:07 compute-1 sudo[219017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:07 compute-1 sudo[219017]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:08 compute-1 podman[218838]: 2026-01-31 07:07:08.067296015 +0000 UTC m=+17.338036046 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 07:07:08 compute-1 podman[219064]: 2026-01-31 07:07:08.178805309 +0000 UTC m=+0.044109503 container create a3c644e8d19767ebf8de2ab7478ab0cbdf949c047cff8512f994ff77edec337f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 07:07:08 compute-1 podman[219064]: 2026-01-31 07:07:08.155027697 +0000 UTC m=+0.020331921 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 07:07:08 compute-1 python3[218725]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 07:07:08 compute-1 sudo[218723]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:08 compute-1 sudo[219250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwefcqaplsxukgolslyxunzxwwkibovw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843228.4196737-3242-176647683567121/AnsiballZ_stat.py'
Jan 31 07:07:08 compute-1 sudo[219250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:08 compute-1 python3.9[219252]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:07:08 compute-1 sudo[219250]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:09.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:09 compute-1 sudo[219404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzaqmbvdvhwpsdwhrepexrtoqlcshsvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843229.6006157-3278-267276788006217/AnsiballZ_container_config_data.py'
Jan 31 07:07:09 compute-1 sudo[219404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:09 compute-1 ceph-mon[81728]: pgmap v691: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:09 compute-1 python3.9[219406]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 07:07:10 compute-1 sudo[219404]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:10 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:10 compute-1 sudo[219556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkdlapmuujsslaadqjcjsfxidyfgkifl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843230.5066326-3311-248241164714099/AnsiballZ_container_config_hash.py'
Jan 31 07:07:10 compute-1 sudo[219556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:10 compute-1 python3.9[219558]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:07:10 compute-1 sudo[219556]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:11.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:11.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:11 compute-1 sudo[219708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvqhmyrjskepbyhpmodrctuecxqjynlf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843231.4748979-3341-63705466566591/AnsiballZ_edpm_container_manage.py'
Jan 31 07:07:11 compute-1 sudo[219708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:11 compute-1 python3[219710]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:07:12 compute-1 podman[219747]: 2026-01-31 07:07:12.065454699 +0000 UTC m=+0.032768247 container create ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Jan 31 07:07:12 compute-1 podman[219747]: 2026-01-31 07:07:12.047569565 +0000 UTC m=+0.014883134 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 07:07:12 compute-1 python3[219710]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 07:07:12 compute-1 sudo[219708]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:12 compute-1 ceph-mon[81728]: pgmap v692: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:13.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:13.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:13 compute-1 sudo[219934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnpszdmfljjxqcpgxcxhulkevkbiwnxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843233.182406-3365-44329715386112/AnsiballZ_stat.py'
Jan 31 07:07:13 compute-1 sudo[219934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:13 compute-1 python3.9[219936]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:07:13 compute-1 sudo[219934]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:13 compute-1 ceph-mon[81728]: pgmap v693: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:14 compute-1 sudo[220088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyddnnrdxcrvnoisoupseudjluxlxoud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843233.9151273-3392-154610124671750/AnsiballZ_file.py'
Jan 31 07:07:14 compute-1 sudo[220088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:14 compute-1 python3.9[220090]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:07:14 compute-1 sudo[220088]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:14 compute-1 sudo[220239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyzlyoayxwnbidtkhpxvhraekxdjizap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843234.409072-3392-86208350762938/AnsiballZ_copy.py'
Jan 31 07:07:14 compute-1 sudo[220239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:14 compute-1 python3.9[220241]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843234.409072-3392-86208350762938/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:07:14 compute-1 sudo[220239]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:15.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:15 compute-1 sudo[220315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwpymzxzrewqkvsewjehlshsxhzsofqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843234.409072-3392-86208350762938/AnsiballZ_systemd.py'
Jan 31 07:07:15 compute-1 sudo[220315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:15 compute-1 python3.9[220317]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:07:15 compute-1 systemd[1]: Reloading.
Jan 31 07:07:15 compute-1 systemd-rc-local-generator[220341]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:07:15 compute-1 systemd-sysv-generator[220346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:07:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:15 compute-1 sudo[220315]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:15 compute-1 sudo[220426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsxrhswmcxtjlbtpsorzcobnhhwdiitq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843234.409072-3392-86208350762938/AnsiballZ_systemd.py'
Jan 31 07:07:15 compute-1 sudo[220426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:16 compute-1 ceph-mon[81728]: pgmap v694: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:16 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 979 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:16 compute-1 python3.9[220428]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:07:16 compute-1 systemd[1]: Reloading.
Jan 31 07:07:16 compute-1 systemd-rc-local-generator[220459]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:07:16 compute-1 systemd-sysv-generator[220463]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:07:16 compute-1 systemd[1]: Starting nova_compute container...
Jan 31 07:07:16 compute-1 systemd[1]: Started libcrun container.
Jan 31 07:07:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-1 podman[220469]: 2026-01-31 07:07:16.583176409 +0000 UTC m=+0.083617012 container init ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:07:16 compute-1 podman[220469]: 2026-01-31 07:07:16.587908937 +0000 UTC m=+0.088349520 container start ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible)
Jan 31 07:07:16 compute-1 podman[220469]: nova_compute
Jan 31 07:07:16 compute-1 nova_compute[220484]: + sudo -E kolla_set_configs
Jan 31 07:07:16 compute-1 systemd[1]: Started nova_compute container.
Jan 31 07:07:16 compute-1 sudo[220426]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Validating config file
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying service configuration files
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Deleting /etc/ceph
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Creating directory /etc/ceph
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Writing out command to execute
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:07:16 compute-1 nova_compute[220484]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:07:16 compute-1 nova_compute[220484]: ++ cat /run_command
Jan 31 07:07:16 compute-1 nova_compute[220484]: + CMD=nova-compute
Jan 31 07:07:16 compute-1 nova_compute[220484]: + ARGS=
Jan 31 07:07:16 compute-1 nova_compute[220484]: + sudo kolla_copy_cacerts
Jan 31 07:07:16 compute-1 nova_compute[220484]: Running command: 'nova-compute'
Jan 31 07:07:16 compute-1 nova_compute[220484]: + [[ ! -n '' ]]
Jan 31 07:07:16 compute-1 nova_compute[220484]: + . kolla_extend_start
Jan 31 07:07:16 compute-1 nova_compute[220484]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 07:07:16 compute-1 nova_compute[220484]: + umask 0022
Jan 31 07:07:16 compute-1 nova_compute[220484]: + exec nova-compute
Jan 31 07:07:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:17.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:17.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:18 compute-1 python3.9[220646]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:07:18 compute-1 ceph-mon[81728]: pgmap v695: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:18 compute-1 nova_compute[220484]: 2026-01-31 07:07:18.567 220488 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:07:18 compute-1 nova_compute[220484]: 2026-01-31 07:07:18.567 220488 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:07:18 compute-1 nova_compute[220484]: 2026-01-31 07:07:18.567 220488 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:07:18 compute-1 nova_compute[220484]: 2026-01-31 07:07:18.568 220488 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 07:07:18 compute-1 nova_compute[220484]: 2026-01-31 07:07:18.738 220488 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:07:18 compute-1 nova_compute[220484]: 2026-01-31 07:07:18.748 220488 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:07:18 compute-1 nova_compute[220484]: 2026-01-31 07:07:18.749 220488 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 07:07:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:19.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:19.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:19 compute-1 python3.9[220800]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:07:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:07:19.882 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:07:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:07:19.883 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:07:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:07:19.883 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:07:20 compute-1 python3.9[220950]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:07:20 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:20 compute-1 ceph-mon[81728]: pgmap v696: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:21.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:21 compute-1 nova_compute[220484]: 2026-01-31 07:07:21.558 220488 INFO nova.virt.driver [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 07:07:21 compute-1 sudo[221100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezgmziwyoqweuepjapqjorqjecgbhlas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843241.063123-3572-206179735405554/AnsiballZ_podman_container.py'
Jan 31 07:07:21 compute-1 sudo[221100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:21 compute-1 nova_compute[220484]: 2026-01-31 07:07:21.665 220488 INFO nova.compute.provider_config [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 07:07:21 compute-1 python3.9[221102]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 07:07:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:21 compute-1 ceph-mon[81728]: pgmap v697: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:21 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:07:21 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 984 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:21 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:07:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:21 compute-1 sudo[221100]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.389 220488 DEBUG oslo_concurrency.lockutils [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.390 220488 DEBUG oslo_concurrency.lockutils [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.391 220488 DEBUG oslo_concurrency.lockutils [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.391 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.391 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.391 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.392 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.392 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.392 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.392 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.392 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.392 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.392 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.393 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.393 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.393 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.393 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.393 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.393 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.394 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.394 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.394 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.394 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.394 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.394 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.394 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.395 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.395 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.395 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.395 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.395 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.395 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.396 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.396 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.396 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.396 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.396 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.396 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.396 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.397 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.397 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.397 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.397 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.397 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.398 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.398 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.398 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.398 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.398 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.398 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.399 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.399 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.399 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.399 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.399 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.399 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.399 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.400 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.400 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.400 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.400 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.400 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.401 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.401 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.401 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.401 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.401 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.402 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.402 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.402 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.402 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.402 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.403 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.403 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.403 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.403 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.403 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.404 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.404 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.404 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.404 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.404 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.405 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.405 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.405 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.405 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.405 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.406 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.406 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.406 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.406 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.406 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.407 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.407 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.407 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.407 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.407 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.408 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.408 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.408 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.408 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.408 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.409 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.409 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.409 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.409 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.409 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.410 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.410 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.410 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.410 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.410 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.411 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.411 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.411 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.411 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.411 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.412 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.412 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.412 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.412 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.412 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.412 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.413 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.413 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.413 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.413 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.413 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.414 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.414 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.414 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.414 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.414 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.415 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.415 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.415 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.415 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.415 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.416 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.416 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.416 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.416 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.417 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.417 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.417 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.417 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.417 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.417 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.418 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.418 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.418 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.418 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.419 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.419 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.419 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.419 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.419 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.420 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.420 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.420 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.420 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.420 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.421 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.421 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.421 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.421 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.421 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.422 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.422 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.422 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.422 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.422 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.423 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.423 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.423 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.423 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.424 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.424 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.424 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.424 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.424 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.425 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.425 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.425 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.425 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.425 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.426 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.426 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.426 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.426 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.427 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.427 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.427 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.427 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.427 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.427 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.428 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.428 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.428 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.428 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.428 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.429 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.429 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.429 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.429 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.430 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.430 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.430 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.430 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.431 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.431 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.431 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.431 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.431 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.432 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.432 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.432 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.432 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.432 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.433 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.433 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.433 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.433 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.433 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.434 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.434 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.434 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.434 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.434 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.435 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.435 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.435 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.435 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.435 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.436 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.436 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.436 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.436 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.436 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.437 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.437 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.437 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.437 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.437 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.438 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.438 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.438 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.438 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.438 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.439 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.439 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.439 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.439 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.439 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.440 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.440 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.440 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.440 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.440 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.441 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.441 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.441 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.441 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.441 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.442 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.442 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.442 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.442 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.442 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.443 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.443 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.443 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.443 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.443 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.444 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.444 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.444 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.444 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.444 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.445 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.445 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.445 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.445 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.445 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.446 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.446 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.446 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.446 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.446 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.447 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.447 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.447 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.447 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.447 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.448 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.448 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.448 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.448 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.448 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.449 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.449 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.449 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.449 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.449 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.450 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.450 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.450 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.450 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.451 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.451 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.451 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.451 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.451 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.452 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.452 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.452 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.452 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.452 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.453 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.453 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.453 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.453 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.453 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.454 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.454 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.454 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.454 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.455 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.455 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.455 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.455 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.455 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.456 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.456 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.456 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.456 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.456 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.457 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.457 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.457 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.457 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.457 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.458 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.458 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.458 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.458 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.458 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.459 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.459 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.459 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.459 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.459 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.460 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.460 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.460 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.460 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.460 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.461 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.461 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.461 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.461 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.462 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.462 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.462 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.462 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.462 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.463 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.463 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.463 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.463 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.463 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.464 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.464 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.464 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.464 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.464 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.465 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.465 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.465 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.465 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.465 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.466 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.466 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.466 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.466 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.466 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.467 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.467 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.467 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.467 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.467 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.468 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.468 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.468 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.468 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.468 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.469 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.469 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.469 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.469 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.469 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.470 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.470 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.470 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.470 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.471 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.471 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.471 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.471 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.471 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.472 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.472 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.472 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.472 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.472 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.473 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.473 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.473 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.473 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.473 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.474 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.474 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.474 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.474 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.474 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.475 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.475 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.475 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.475 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.475 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.476 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.476 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.476 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.476 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.476 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.477 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.477 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.477 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.477 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.477 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.478 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.478 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.478 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.478 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.478 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.479 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.479 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.479 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.479 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.479 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.480 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.480 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.480 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.480 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.480 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.481 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.481 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.481 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.481 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.481 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.482 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.482 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.482 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.482 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.482 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.483 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.483 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.483 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.483 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.483 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.484 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.484 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.484 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.484 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.484 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.485 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.485 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.485 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.485 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.486 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.486 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.486 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.486 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.486 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.487 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.487 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.487 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.487 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.487 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.488 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.488 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.488 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.488 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.488 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.489 220488 WARNING oslo_config.cfg [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 07:07:22 compute-1 nova_compute[220484]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 07:07:22 compute-1 nova_compute[220484]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Jan 31 07:07:22 compute-1 nova_compute[220484]: and ``live_migration_inbound_addr`` respectively.
Jan 31 07:07:22 compute-1 nova_compute[220484]: ).  Its value may be silently ignored in the future.
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.489 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
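The warning above recommends replacing live_migration_uri with live_migration_scheme and live_migration_inbound_addr. A minimal nova.conf sketch of that replacement, assuming the same qemu+tls transport shown in the dumped value; the inbound address is a hypothetical placeholder to be set per compute host:

    [libvirt]
    # replaces the deprecated: live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # hypothetical value; set to this node's migration network address
    live_migration_inbound_addr = compute-1.internal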
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.489 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.489 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.490 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.490 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.490 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.490 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.490 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.491 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.491 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.491 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.491 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.491 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.492 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.492 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.492 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.492 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.492 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.494 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rbd_secret_uuid        = ef73c6e0-6d85-55c2-9347-1f544d3e3d3a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.494 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.494 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.494 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.495 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.495 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.495 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.495 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.495 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.495 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.496 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.496 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.496 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.496 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.497 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.497 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.497 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.497 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.497 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.498 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.498 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.498 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.498 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.498 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.498 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.498 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.499 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.499 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.499 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.499 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.499 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.500 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.500 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.500 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
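Taken together, the libvirt options dumped above describe a KVM host using Ceph RBD for ephemeral storage with TLS-native live migration. A minimal sketch of the nova.conf [libvirt] section they would map back to, built only from values that appear in this dump:

    [libvirt]
    virt_type = kvm
    cpu_mode = custom
    cpu_models = Nehalem
    hw_machine_type = x86_64=q35
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
    live_migration_with_native_tls = True
    volume_use_multipath = True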
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.500 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.500 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.501 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.501 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.501 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.501 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.501 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.501 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.502 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.502 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.502 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.502 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.502 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.502 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.503 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.503 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.503 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.503 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.503 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.503 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.504 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.504 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.504 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.504 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.504 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.504 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.505 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.505 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.505 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.505 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.505 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.505 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.506 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.506 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.506 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.506 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.506 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.506 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.507 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.507 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.507 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.507 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.507 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.508 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.508 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.508 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.508 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.508 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.508 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.509 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.509 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.509 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.509 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.509 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.509 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.510 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.510 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.510 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.510 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.511 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.511 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.511 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.511 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.511 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.512 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.512 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.512 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.512 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.513 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.513 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.513 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.513 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.514 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.514 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.514 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.514 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.515 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.515 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.515 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.515 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.516 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.516 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.516 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.516 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.516 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.517 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.517 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.517 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.518 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.518 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.518 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.518 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.518 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.519 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.519 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.519 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.519 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.519 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.519 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.520 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.520 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.520 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.520 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.520 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.520 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.521 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.521 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.521 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.521 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.521 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.522 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.522 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.522 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.522 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.522 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.522 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.522 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.523 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.523 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.523 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.523 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.523 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.523 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.523 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.524 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.524 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.524 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.524 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.524 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.525 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.525 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.525 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.525 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.525 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.525 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.526 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.526 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.526 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.526 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.526 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.527 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.527 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.527 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.527 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.527 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.527 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.528 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.528 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.528 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.528 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.528 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.528 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.529 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.529 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.529 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.529 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.529 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.530 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.530 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.530 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.530 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.530 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.530 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.531 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.531 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.531 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.531 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.531 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.531 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.532 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.532 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.532 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.532 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.532 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.532 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.533 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.533 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.533 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.533 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.533 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.533 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.534 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.534 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.534 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.534 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.534 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.534 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.534 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.535 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.535 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.535 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.535 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.535 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.535 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.535 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.536 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.536 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.536 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.536 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.536 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.537 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.537 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.537 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.537 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.537 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.538 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.538 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.538 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.538 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.538 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.538 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.538 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.539 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.539 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.539 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.539 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.539 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.540 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.540 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.540 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.540 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.541 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.541 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.541 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.541 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.541 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.541 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.541 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.542 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.542 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.542 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.542 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.542 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.543 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.543 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.543 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.543 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.543 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.543 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.544 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.544 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.544 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.544 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.544 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.544 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.545 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.545 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.545 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.545 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.545 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.545 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.546 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.546 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.546 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.546 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.546 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.546 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.547 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.547 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.547 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.547 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.547 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.547 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.547 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.548 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.548 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.548 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.548 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.548 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.548 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.548 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.549 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.549 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.549 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.549 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.549 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.550 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.550 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.550 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.550 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.550 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.550 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.550 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.551 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.551 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.551 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.551 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.551 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.551 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.552 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.552 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.552 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.552 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.552 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.552 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.552 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.553 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.553 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.553 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.553 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.553 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.553 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.554 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.554 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.554 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.554 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.554 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.554 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.554 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.555 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.555 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.555 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.555 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.555 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.555 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.556 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.556 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.556 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.556 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.556 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.556 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.557 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.557 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.557 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.557 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.557 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.557 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.558 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.558 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.558 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.558 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.558 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.558 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.559 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.559 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.559 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.559 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.559 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.559 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.559 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.560 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.560 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.560 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.560 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.560 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.560 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.561 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.561 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.561 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.561 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.561 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.561 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.561 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.562 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.562 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.562 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.562 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.562 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.563 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.563 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.563 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.563 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.563 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.563 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.564 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.564 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.564 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.564 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.564 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.565 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.565 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.565 220488 DEBUG oslo_service.service [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:07:22 compute-1 nova_compute[220484]: 2026-01-31 07:07:22.566 220488 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 07:07:22 compute-1 sudo[221276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcvjvoboopvweykvaawkgyjtrsdilhfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843242.3666694-3597-35077460724744/AnsiballZ_systemd.py'
Jan 31 07:07:22 compute-1 sudo[221276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:22 compute-1 python3.9[221278]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:07:22 compute-1 systemd[1]: Stopping nova_compute container...
Jan 31 07:07:23 compute-1 nova_compute[220484]: 2026-01-31 07:07:23.014 220488 DEBUG oslo_concurrency.lockutils [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:07:23 compute-1 nova_compute[220484]: 2026-01-31 07:07:23.015 220488 DEBUG oslo_concurrency.lockutils [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:07:23 compute-1 nova_compute[220484]: 2026-01-31 07:07:23.016 220488 DEBUG oslo_concurrency.lockutils [None req-af486716-8feb-4e9f-98ed-e923a5a6f151 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:07:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:23.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:23.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:23 compute-1 systemd[1]: libpod-ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79.scope: Deactivated successfully.
Jan 31 07:07:23 compute-1 systemd[1]: libpod-ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79.scope: Consumed 3.008s CPU time.
Jan 31 07:07:23 compute-1 conmon[220484]: conmon ca5eceb991c5d9598f0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79.scope/container/memory.events
Jan 31 07:07:23 compute-1 podman[221282]: 2026-01-31 07:07:23.517164457 +0000 UTC m=+0.538847938 container died ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:23 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79-userdata-shm.mount: Deactivated successfully.
Jan 31 07:07:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849-merged.mount: Deactivated successfully.
Jan 31 07:07:23 compute-1 podman[221282]: 2026-01-31 07:07:23.681815708 +0000 UTC m=+0.703499189 container cleanup ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:07:23 compute-1 podman[221282]: nova_compute
Jan 31 07:07:23 compute-1 podman[221312]: nova_compute
Jan 31 07:07:23 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 07:07:23 compute-1 systemd[1]: Stopped nova_compute container.
Jan 31 07:07:23 compute-1 systemd[1]: Starting nova_compute container...
Jan 31 07:07:23 compute-1 systemd[1]: Started libcrun container.
Jan 31 07:07:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7854a349df4d24d0242c037a7f72a073c68d1c2e9d4169f33792f8dca29849/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:23 compute-1 podman[221323]: 2026-01-31 07:07:23.837221439 +0000 UTC m=+0.088628107 container init ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm)
Jan 31 07:07:23 compute-1 podman[221323]: 2026-01-31 07:07:23.842562104 +0000 UTC m=+0.093968752 container start ca5eceb991c5d9598f0b85edfca8d2741f2a52148b515d6ca016fd356866dc79 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 07:07:23 compute-1 nova_compute[221338]: + sudo -E kolla_set_configs
Jan 31 07:07:23 compute-1 podman[221323]: nova_compute
Jan 31 07:07:23 compute-1 systemd[1]: Started nova_compute container.
Jan 31 07:07:23 compute-1 sudo[221276]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Validating config file
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying service configuration files
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /etc/ceph
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Creating directory /etc/ceph
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Writing out command to execute
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:07:23 compute-1 nova_compute[221338]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:07:23 compute-1 nova_compute[221338]: ++ cat /run_command
Jan 31 07:07:23 compute-1 nova_compute[221338]: + CMD=nova-compute
Jan 31 07:07:23 compute-1 nova_compute[221338]: + ARGS=
Jan 31 07:07:23 compute-1 nova_compute[221338]: + sudo kolla_copy_cacerts
Jan 31 07:07:23 compute-1 nova_compute[221338]: + [[ ! -n '' ]]
Jan 31 07:07:23 compute-1 nova_compute[221338]: + . kolla_extend_start
Jan 31 07:07:23 compute-1 nova_compute[221338]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 07:07:23 compute-1 nova_compute[221338]: Running command: 'nova-compute'
Jan 31 07:07:23 compute-1 nova_compute[221338]: + umask 0022
Jan 31 07:07:23 compute-1 nova_compute[221338]: + exec nova-compute
Jan 31 07:07:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:25.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:25.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:25 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:25 compute-1 nova_compute[221338]: 2026-01-31 07:07:25.804 221342 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:07:25 compute-1 nova_compute[221338]: 2026-01-31 07:07:25.804 221342 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:07:25 compute-1 nova_compute[221338]: 2026-01-31 07:07:25.804 221342 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:07:25 compute-1 nova_compute[221338]: 2026-01-31 07:07:25.805 221342 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 07:07:25 compute-1 nova_compute[221338]: 2026-01-31 07:07:25.955 221342 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:07:25 compute-1 nova_compute[221338]: 2026-01-31 07:07:25.964 221342 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:07:25 compute-1 nova_compute[221338]: 2026-01-31 07:07:25.964 221342 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 07:07:26 compute-1 nova_compute[221338]: 2026-01-31 07:07:26.968 221342 INFO nova.virt.driver [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 07:07:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:27.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.064 221342 INFO nova.compute.provider_config [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 07:07:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:27.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.403 221342 DEBUG oslo_concurrency.lockutils [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.403 221342 DEBUG oslo_concurrency.lockutils [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.404 221342 DEBUG oslo_concurrency.lockutils [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.404 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.404 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.404 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.405 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.405 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.405 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.405 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.405 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.406 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.406 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.406 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.406 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.406 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.406 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.407 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.407 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.407 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.407 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.407 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.408 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.408 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.408 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.408 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.409 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.409 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.409 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.409 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.409 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.410 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.410 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.410 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.410 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.410 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.411 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.411 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.411 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.411 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.411 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.412 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.412 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.412 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.412 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.412 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.413 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.413 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.413 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.413 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.413 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.414 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.414 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.414 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.414 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.415 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.415 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.415 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.415 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.415 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.415 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.416 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.416 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.416 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.416 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.416 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.416 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.417 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.417 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.417 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.417 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.417 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.418 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.418 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.418 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.418 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.418 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.419 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.419 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.419 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.419 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.419 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.420 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.420 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.420 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.420 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.420 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.421 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.421 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.421 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.421 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.421 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.422 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.422 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.422 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.422 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.422 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.422 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.423 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.423 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.423 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.423 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.423 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.424 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.424 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.424 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.424 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.424 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.424 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.425 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.425 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.425 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.425 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.425 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.425 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.426 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.426 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.426 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.426 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.426 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.426 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.427 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.427 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.427 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.427 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.427 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.428 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.428 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.428 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.428 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.428 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.428 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.429 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.429 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.429 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.429 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.429 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.429 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.430 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.430 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.430 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.430 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.430 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.430 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.431 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.431 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.431 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.431 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.431 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.432 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.432 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.432 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.432 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.433 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.433 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.433 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.433 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.433 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.434 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.434 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.434 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.434 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.434 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.435 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.435 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.435 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.435 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.436 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.436 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.436 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.436 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.436 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.437 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.437 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.437 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.437 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.437 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.438 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.438 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.438 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.438 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.438 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.439 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.439 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.439 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.439 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.440 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.440 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.440 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.440 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.440 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.441 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.441 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.441 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.441 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.441 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.442 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.442 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.442 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.442 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.442 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.443 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.443 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.443 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.443 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.443 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.443 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.444 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.444 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.444 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.444 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.444 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.445 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.445 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.445 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.445 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.445 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.445 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.446 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.446 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.446 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.446 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.446 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.447 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.447 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.447 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.447 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.447 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.447 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.448 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.448 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.448 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.448 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.448 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.449 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.449 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.449 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.449 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.449 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.449 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.450 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.450 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.450 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.450 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.450 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.451 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.451 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.451 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.451 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.451 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.452 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.452 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.452 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.452 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.453 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.453 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.453 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.453 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.453 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.453 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.454 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.454 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.454 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.454 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.455 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.455 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.455 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.455 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.455 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.455 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.456 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.456 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.456 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.456 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.456 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.457 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.457 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.457 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.457 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.457 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.457 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.458 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.458 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.458 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.458 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.458 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.459 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.459 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.459 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.459 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.459 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.460 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.460 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.460 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.460 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.460 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.460 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.461 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.461 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.461 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.461 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.462 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.462 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.462 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.462 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.462 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.462 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.463 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.463 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.463 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.463 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.463 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.464 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.464 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.464 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.464 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.464 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.465 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.465 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.465 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.465 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.465 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.466 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.466 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.466 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.466 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.466 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.467 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.467 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.467 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.467 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.467 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.468 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.468 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.468 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.468 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.468 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.469 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.469 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.469 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.469 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.469 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.470 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.470 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.470 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.470 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.470 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.471 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.471 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.471 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.471 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.471 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.472 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.472 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.472 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.472 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.472 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.473 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.473 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.473 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.474 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.474 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.474 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.474 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.474 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.475 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.475 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.475 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.475 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.475 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.476 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.476 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.476 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.476 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.476 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.477 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.477 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.477 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.477 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.478 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.478 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.478 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.478 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.478 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.479 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.479 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.479 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.479 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.479 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.480 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.480 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.480 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.480 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.480 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.481 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.481 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.481 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.481 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.481 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.481 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.482 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.482 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.482 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.482 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.482 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.483 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.483 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.483 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.483 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.483 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.483 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.484 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.484 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.484 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.484 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.484 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.485 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.485 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.485 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.485 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.485 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.486 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.486 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.486 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.486 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.486 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.486 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.487 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.487 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.487 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.487 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.487 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.487 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.488 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.488 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.488 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.488 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.488 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.489 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.489 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.489 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.489 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.489 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.490 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.490 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.490 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.490 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.490 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.490 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.491 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.491 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.491 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.491 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.491 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.492 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.492 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.492 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.492 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.492 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.492 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.493 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.493 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.493 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.493 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.494 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.494 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.494 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.494 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.495 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.495 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.495 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.495 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.495 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.496 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.496 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.496 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.496 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.497 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.497 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.497 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.497 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.497 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.498 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.498 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.498 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.498 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.499 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.499 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.499 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.499 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.500 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.500 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.500 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.500 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.501 221342 WARNING oslo_config.cfg [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 07:07:27 compute-1 nova_compute[221338]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 07:07:27 compute-1 nova_compute[221338]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Jan 31 07:07:27 compute-1 nova_compute[221338]: and ``live_migration_inbound_addr`` respectively.
Jan 31 07:07:27 compute-1 nova_compute[221338]: ).  Its value may be silently ignored in the future.
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.501 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
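The WARNING above flags ``live_migration_uri`` as deprecated in favor of ``live_migration_scheme`` plus ``live_migration_inbound_addr``. A hedged migration sketch: the configured URI qemu+tls://%s/system encodes a TLS transport, so the scheme would become tls; the inbound address below is a hypothetical placeholder, not a value taken from this log:

    [libvirt]
    # replaces: live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # hypothetical placeholder; set this host's real migration address
    live_migration_inbound_addr = 192.0.2.10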
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.502 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.502 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.502 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.502 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.503 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.503 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.503 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.503 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.504 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.504 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.504 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.504 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.504 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.505 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.505 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.505 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.505 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.505 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rbd_secret_uuid        = ef73c6e0-6d85-55c2-9347-1f544d3e3d3a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.505 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.506 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.506 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.506 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.506 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.507 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.507 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.507 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.507 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.508 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.508 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.508 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.508 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.509 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.509 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.509 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.509 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.510 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.510 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.510 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.510 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.510 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.511 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.511 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.511 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.511 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.512 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.512 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.512 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.512 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.512 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.513 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.513 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
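Read together, the [libvirt] dump describes this host: KVM with a pinned custom Nehalem CPU model, q35 machine type, Ceph RBD ephemeral storage in the vms pool, native-TLS live migration, multipath volumes, and vTPM (swtpm) support enabled. A sketch of the equivalent nova.conf stanza, restricted to values visible in the dump above:

    [libvirt]
    virt_type = kvm
    cpu_mode = custom
    cpu_models = Nehalem
    hw_machine_type = x86_64=q35
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = ef73c6e0-6d85-55c2-9347-1f544d3e3d3a
    live_migration_with_native_tls = True
    live_migration_permit_auto_converge = True
    volume_use_multipath = True
    swtpm_enabled = True
    rx_queue_size = 512
    tx_queue_size = 512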
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.513 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.513 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.513 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.514 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.514 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.514 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.514 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.515 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.515 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.515 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.515 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.515 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.516 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.516 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.516 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.516 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.516 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.517 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.517 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.517 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.517 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.517 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.517 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.518 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.518 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.518 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.518 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.518 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
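The [neutron] values above show password-based Keystone auth, the OVS integration bridge br-int, regionOne, and the compute-side metadata proxy enabled. A sketch using only the logged values; the shared secret is masked in the log and stays masked here:

    [neutron]
    auth_type = password
    region_name = regionOne
    valid_interfaces = internal
    default_floating_pool = nova
    ovs_bridge = br-int
    service_metadata_proxy = True
    metadata_proxy_shared_secret = ****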
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.519 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.519 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.519 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.519 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.520 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.520 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.520 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.520 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.520 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.521 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.521 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.521 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.521 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.521 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.522 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.522 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.522 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.522 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.522 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.522 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.523 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.523 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.523 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.523 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.523 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.524 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.524 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.524 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.524 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.525 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.525 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.525 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.525 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.525 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.526 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.526 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.526 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.526 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.526 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.526 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.527 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.527 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.527 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.527 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.528 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
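The [placement] group carries the service credentials nova-compute uses to reach the placement API over the internal endpoint. A sketch from the logged values only; the password is masked in the log and left masked:

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    region_name = regionOne
    valid_interfaces = internal
    project_name = service
    project_domain_name = Default
    username = nova
    user_domain_name = Default
    password = ****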
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.528 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.528 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.528 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.528 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.529 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.529 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.529 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.529 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.529 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.530 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.530 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.530 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.530 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.530 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.531 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.531 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.531 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.531 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.532 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.532 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.532 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.532 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.532 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.533 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.533 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.533 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.533 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.533 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.534 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.534 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.534 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.534 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.534 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.535 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.535 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.535 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.535 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.536 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.536 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.536 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.536 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.536 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.536 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.537 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.537 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.537 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.537 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.538 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.538 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.538 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.538 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.539 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.539 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.539 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.539 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.540 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.540 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.540 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.540 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.541 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.541 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.541 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.541 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.542 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.542 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.542 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.542 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.542 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.543 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.543 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.543 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.543 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.544 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.544 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.544 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.545 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.545 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.545 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.545 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.545 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.546 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.546 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.546 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.546 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.547 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.547 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.547 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.547 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.548 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.548 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.548 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.548 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.548 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.549 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.549 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.549 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.549 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.550 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.550 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.550 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.550 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.550 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.551 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.551 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.551 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.551 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.552 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.552 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.552 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.552 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.552 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.553 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.553 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.553 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.553 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.553 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.554 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.554 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.554 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.554 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.554 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.555 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.555 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.555 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.555 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.556 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.556 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.556 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.556 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.557 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.557 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
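
The [vnc] group just above is one of the few here carrying deployment-specific values rather than defaults: consoles are enabled, novncproxy_base_url points at the public noVNC route, and server_proxyclient_address is this compute host's own address (192.168.122.101). A hedged sketch of reading those values back through oslo.config outside of nova; the registrations below use simple option types and are only needed because this runs standalone (nova registers the real, fully-typed options itself), and the config path is an assumption:

    # Standalone sketch: read the [vnc] values shown above from nova.conf.
    # Option types are simplified; /etc/nova/nova.conf is an assumed path.
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.BoolOpt('enabled'),
            cfg.URIOpt('novncproxy_base_url'),
            cfg.StrOpt('server_proxyclient_address'),
        ],
        group='vnc',
    )
    CONF(['--config-file', '/etc/nova/nova.conf'])
    print(CONF.vnc.enabled, CONF.vnc.novncproxy_base_url)
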
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.557 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.557 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.557 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.557 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.558 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.558 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.558 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.558 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.558 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.559 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.559 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.559 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.559 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.560 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.560 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.560 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.560 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.560 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.560 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.561 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.561 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.561 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.561 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.561 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.562 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.562 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.562 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.562 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.562 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.562 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.563 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.563 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.563 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.563 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.564 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.564 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.564 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.564 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.564 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.565 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.565 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.565 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.565 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.565 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.566 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.566 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.566 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.566 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.566 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.567 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.567 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.567 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.567 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.567 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.568 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.568 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.568 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.568 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.569 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.569 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.569 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.569 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.569 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.570 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.570 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.570 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.570 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.570 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.571 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.571 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.571 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.571 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.571 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.571 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.572 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.572 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.572 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.572 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.573 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.573 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.573 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
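
Worth noting in the [oslo_messaging_rabbit] block above: rabbit_quorum_queue = True together with amqp_durable_queues = True means the RPC queues are declared as RabbitMQ quorum queues (replicated and durable) rather than classic transient queues, with the rabbit_quorum_* limits left at 0, i.e. no limit set. oslo.messaging consumes this group when the service builds its RPC transport; roughly, and as a sketch only (the config path is an assumption, and nova performs the equivalent internally):

    # Sketch: oslo.messaging reads [oslo_messaging_rabbit] from the parsed
    # config when the transport is created. transport_url is taken from the
    # config file; it is masked as "****" in the dump above.
    from oslo_config import cfg
    import oslo_messaging as messaging

    CONF = cfg.ConfigOpts()
    CONF(['--config-file', '/etc/nova/nova.conf'])
    transport = messaging.get_rpc_transport(CONF)  # uses CONF.transport_url
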
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.573 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.573 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.574 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.574 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.574 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.574 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.574 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.574 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.575 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.575 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.575 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.575 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.576 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.576 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.576 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.576 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.576 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.577 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.577 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.577 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.577 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.577 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.578 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.578 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.578 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.578 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.578 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.579 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.579 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.579 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.579 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.579 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.580 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.580 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.580 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.580 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.581 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.581 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.581 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.581 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.581 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.582 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.582 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.582 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.582 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.582 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.583 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.583 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.583 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.583 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.583 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.584 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.584 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.584 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.584 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.584 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.585 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.585 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.585 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.585 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.585 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.586 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.586 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.586 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.586 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.586 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.587 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.587 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.587 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.587 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.587 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.588 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.588 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.588 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.588 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.588 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.588 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.589 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.589 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.589 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.589 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.589 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.590 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.590 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.590 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.590 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.590 221342 DEBUG oslo_service.service [None req-e841096c-fe2d-4d77-bb2d-e17dd842c7b9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:07:27 compute-1 nova_compute[221338]: 2026-01-31 07:07:27.592 221342 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.011 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.012 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.012 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.013 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 07:07:28 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 07:07:28 compute-1 systemd[1]: Started libvirt QEMU daemon.
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.080 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f2a7f6d2910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.082 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f2a7f6d2910> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.086 221342 INFO nova.virt.libvirt.driver [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Connection event '1' reason 'None'
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.808 221342 WARNING nova.virt.libvirt.driver [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.809 221342 DEBUG nova.virt.libvirt.volume.mount [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.959 221342 INFO nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 07:07:28 compute-1 nova_compute[221338]: 
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <host>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <uuid>04875c52-09e7-4b89-9409-fc64b5773d83</uuid>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <cpu>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <arch>x86_64</arch>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model>EPYC-Rome-v4</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <vendor>AMD</vendor>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <microcode version='16777317'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <signature family='23' model='49' stepping='0'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='x2apic'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='tsc-deadline'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='osxsave'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='hypervisor'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='tsc_adjust'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='spec-ctrl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='stibp'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='arch-capabilities'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='ssbd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='cmp_legacy'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='topoext'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='virt-ssbd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='lbrv'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='tsc-scale'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='vmcb-clean'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='pause-filter'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='pfthreshold'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='svme-addr-chk'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='rdctl-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='skip-l1dfl-vmentry'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='mds-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature name='pschange-mc-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <pages unit='KiB' size='4'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <pages unit='KiB' size='2048'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <pages unit='KiB' size='1048576'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </cpu>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <power_management>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <suspend_mem/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </power_management>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <iommu support='no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <migration_features>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <live/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <uri_transports>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <uri_transport>tcp</uri_transport>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <uri_transport>rdma</uri_transport>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </uri_transports>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </migration_features>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <topology>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <cells num='1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <cell id='0'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:           <memory unit='KiB'>7864292</memory>
Jan 31 07:07:28 compute-1 nova_compute[221338]:           <pages unit='KiB' size='4'>1966073</pages>
Jan 31 07:07:28 compute-1 nova_compute[221338]:           <pages unit='KiB' size='2048'>0</pages>
Jan 31 07:07:28 compute-1 nova_compute[221338]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 31 07:07:28 compute-1 nova_compute[221338]:           <distances>
Jan 31 07:07:28 compute-1 nova_compute[221338]:             <sibling id='0' value='10'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:           </distances>
Jan 31 07:07:28 compute-1 nova_compute[221338]:           <cpus num='8'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:           </cpus>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         </cell>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </cells>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </topology>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <cache>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </cache>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <secmodel>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model>selinux</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <doi>0</doi>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </secmodel>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <secmodel>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model>dac</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <doi>0</doi>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </secmodel>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   </host>
Jan 31 07:07:28 compute-1 nova_compute[221338]: 
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <guest>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <os_type>hvm</os_type>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <arch name='i686'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <wordsize>32</wordsize>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <domain type='qemu'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <domain type='kvm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </arch>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <features>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <pae/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <nonpae/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <acpi default='on' toggle='yes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <apic default='on' toggle='no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <cpuselection/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <deviceboot/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <disksnapshot default='on' toggle='no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <externalSnapshot/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </features>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   </guest>
Jan 31 07:07:28 compute-1 nova_compute[221338]: 
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <guest>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <os_type>hvm</os_type>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <arch name='x86_64'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <wordsize>64</wordsize>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <domain type='qemu'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <domain type='kvm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </arch>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <features>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <acpi default='on' toggle='yes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <apic default='on' toggle='no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <cpuselection/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <deviceboot/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <disksnapshot default='on' toggle='no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <externalSnapshot/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </features>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   </guest>
Jan 31 07:07:28 compute-1 nova_compute[221338]: 
Jan 31 07:07:28 compute-1 nova_compute[221338]: </capabilities>
Jan 31 07:07:28 compute-1 nova_compute[221338]: 
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.964 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 07:07:28 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.985 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 07:07:28 compute-1 nova_compute[221338]: <domainCapabilities>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <domain>kvm</domain>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <arch>i686</arch>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <vcpu max='4096'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <iothreads supported='yes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <os supported='yes'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <enum name='firmware'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <loader supported='yes'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <value>rom</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <value>pflash</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <enum name='readonly'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <value>yes</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <value>no</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <enum name='secure'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <value>no</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </loader>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   </os>
Jan 31 07:07:28 compute-1 nova_compute[221338]:   <cpu>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <value>on</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <value>off</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <mode name='maximum' supported='yes'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <enum name='maximumMigratable'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <value>on</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <value>off</value>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <mode name='host-model' supported='yes'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <vendor>AMD</vendor>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='x2apic'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='stibp'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='ssbd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='succor'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='ibrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='lbrv'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:28 compute-1 nova_compute[221338]:     <mode name='custom' supported='yes'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Broadwell'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v3'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v4'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='ClearwaterForest'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bhi-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ddpd-u'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sha512'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sm3'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sm4'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bhi-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ddpd-u'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sha512'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sm3'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sm4'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cooperlake'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cooperlake-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Cooperlake-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Denverton'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Denverton-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Denverton-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Denverton-v3'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Dhyana-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Turin'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibpb-brtype'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='prefetchi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sbpb'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibpb-brtype'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='prefetchi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sbpb'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-v3'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-v4'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='EPYC-v5'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx10'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx10-128'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx10-256'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx10-512'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx10'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx10-128'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx10-256'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx10-512'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Haswell'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Haswell-IBRS'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Haswell-noTSX'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Haswell-v1'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Haswell-v2'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Haswell-v3'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:07:28 compute-1 nova_compute[221338]:       <blockers model='Haswell-v4'>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:28 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='KnightsMill'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4fmaps'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4vnniw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512er'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512pf'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='KnightsMill-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4fmaps'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4vnniw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512er'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512pf'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tbm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tbm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='athlon'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='athlon-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='core2duo'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='core2duo-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='coreduo'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='coreduo-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='n270'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='n270-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='phenom'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='phenom-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </cpu>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <memoryBacking supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <enum name='sourceType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>file</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>anonymous</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>memfd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </memoryBacking>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <devices>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <disk supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='diskDevice'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>disk</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>cdrom</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>floppy</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>lun</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='bus'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>fdc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>scsi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>sata</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-non-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </disk>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <graphics supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vnc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>egl-headless</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dbus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </graphics>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <video supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='modelType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vga</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>cirrus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>none</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>bochs</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ramfb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </video>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <hostdev supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='mode'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>subsystem</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='startupPolicy'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>default</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>mandatory</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>requisite</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>optional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='subsysType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pci</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>scsi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='capsType'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='pciBackend'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </hostdev>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <rng supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-non-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>random</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>egd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>builtin</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </rng>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <filesystem supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='driverType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>path</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>handle</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtiofs</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </filesystem>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <tpm supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tpm-tis</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tpm-crb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>emulator</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>external</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendVersion'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>2.0</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </tpm>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <redirdev supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='bus'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </redirdev>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <channel supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pty</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>unix</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </channel>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <crypto supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>qemu</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>builtin</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </crypto>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <interface supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>default</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>passt</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </interface>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <panic supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>isa</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>hyperv</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </panic>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <console supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>null</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pty</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dev</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>file</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pipe</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>stdio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>udp</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tcp</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>unix</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>qemu-vdagent</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dbus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </console>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </devices>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <features>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <gic supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <vmcoreinfo supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <genid supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <backingStoreInput supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <backup supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <async-teardown supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <s390-pv supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <ps2 supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <tdx supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <sev supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <sgx supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <hyperv supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='features'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>relaxed</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vapic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>spinlocks</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vpindex</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>runtime</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>synic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>stimer</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>reset</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vendor_id</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>frequencies</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>reenlightenment</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tlbflush</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ipi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>avic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>emsr_bitmap</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>xmm_input</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <defaults>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <spinlocks>4095</spinlocks>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <stimer_direct>on</stimer_direct>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </defaults>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </hyperv>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <launchSecurity supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </features>
Jan 31 07:07:29 compute-1 nova_compute[221338]: </domainCapabilities>
Jan 31 07:07:29 compute-1 nova_compute[221338]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:28.991 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 07:07:29 compute-1 nova_compute[221338]: <domainCapabilities>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <domain>kvm</domain>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <arch>i686</arch>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <vcpu max='240'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <iothreads supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <os supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <enum name='firmware'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <loader supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>rom</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pflash</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='readonly'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>yes</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>no</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='secure'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>no</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </loader>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </os>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <cpu>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>on</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>off</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='maximum' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='maximumMigratable'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>on</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>off</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='host-model' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <vendor>AMD</vendor>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='x2apic'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='stibp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='ssbd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='succor'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='lbrv'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='custom' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='ClearwaterForest'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ddpd-u'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sha512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm3'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ddpd-u'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sha512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm3'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cooperlake'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cooperlake-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cooperlake-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:07:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Dhyana-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Turin'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibpb-brtype'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbpb'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibpb-brtype'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbpb'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-128'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-256'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-128'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-256'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:07:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:07:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:29.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='KnightsMill'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4fmaps'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4vnniw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512er'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512pf'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='KnightsMill-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4fmaps'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4vnniw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512er'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512pf'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tbm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tbm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='athlon'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='athlon-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='core2duo'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='core2duo-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='coreduo'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='coreduo-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='n270'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='n270-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='phenom'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='phenom-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </cpu>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <memoryBacking supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <enum name='sourceType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>file</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>anonymous</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>memfd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </memoryBacking>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <devices>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <disk supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='diskDevice'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>disk</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>cdrom</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>floppy</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>lun</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='bus'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ide</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>fdc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>scsi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>sata</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-non-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </disk>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <graphics supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vnc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>egl-headless</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dbus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </graphics>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <video supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='modelType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vga</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>cirrus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>none</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>bochs</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ramfb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </video>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <hostdev supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='mode'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>subsystem</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='startupPolicy'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>default</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>mandatory</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>requisite</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>optional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='subsysType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pci</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>scsi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='capsType'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='pciBackend'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </hostdev>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <rng supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-non-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>random</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>egd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>builtin</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </rng>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <filesystem supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='driverType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>path</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>handle</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtiofs</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </filesystem>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <tpm supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tpm-tis</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tpm-crb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>emulator</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>external</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendVersion'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>2.0</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </tpm>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <redirdev supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='bus'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </redirdev>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <channel supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pty</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>unix</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </channel>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <crypto supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>qemu</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>builtin</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </crypto>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <interface supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>default</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>passt</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </interface>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <panic supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>isa</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>hyperv</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </panic>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <console supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>null</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pty</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dev</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>file</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pipe</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>stdio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>udp</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tcp</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>unix</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>qemu-vdagent</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dbus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </console>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </devices>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <features>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <gic supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <vmcoreinfo supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <genid supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <backingStoreInput supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <backup supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <async-teardown supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <s390-pv supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <ps2 supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <tdx supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <sev supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <sgx supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <hyperv supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='features'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>relaxed</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vapic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>spinlocks</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vpindex</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>runtime</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>synic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>stimer</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>reset</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vendor_id</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>frequencies</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>reenlightenment</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tlbflush</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ipi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>avic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>emsr_bitmap</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>xmm_input</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <defaults>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <spinlocks>4095</spinlocks>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <stimer_direct>on</stimer_direct>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </defaults>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </hyperv>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <launchSecurity supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </features>
Jan 31 07:07:29 compute-1 nova_compute[221338]: </domainCapabilities>
Jan 31 07:07:29 compute-1 nova_compute[221338]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:29.041 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:29.046 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 07:07:29 compute-1 nova_compute[221338]: <domainCapabilities>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <domain>kvm</domain>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <arch>x86_64</arch>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <vcpu max='4096'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <iothreads supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <os supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <enum name='firmware'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>efi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <loader supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>rom</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pflash</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='readonly'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>yes</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>no</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='secure'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>yes</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>no</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </loader>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </os>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <cpu>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>on</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>off</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='maximum' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='maximumMigratable'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>on</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>off</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='host-model' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <vendor>AMD</vendor>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='x2apic'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='stibp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='ssbd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='succor'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='lbrv'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='custom' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='ClearwaterForest'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ddpd-u'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sha512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm3'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ddpd-u'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sha512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm3'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cooperlake'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cooperlake-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cooperlake-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Dhyana-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Turin'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibpb-brtype'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbpb'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibpb-brtype'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbpb'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-128'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-256'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-128'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-256'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='KnightsMill'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4fmaps'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4vnniw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512er'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512pf'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='KnightsMill-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4fmaps'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4vnniw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512er'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512pf'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tbm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tbm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='athlon'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='athlon-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='core2duo'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='core2duo-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='coreduo'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='coreduo-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='n270'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='n270-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='phenom'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='phenom-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </cpu>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <memoryBacking supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <enum name='sourceType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>file</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>anonymous</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>memfd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </memoryBacking>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <devices>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <disk supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='diskDevice'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>disk</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>cdrom</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>floppy</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>lun</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='bus'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>fdc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>scsi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>sata</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-non-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </disk>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <graphics supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vnc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>egl-headless</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dbus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </graphics>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <video supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='modelType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vga</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>cirrus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>none</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>bochs</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ramfb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </video>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <hostdev supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='mode'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>subsystem</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='startupPolicy'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>default</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>mandatory</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>requisite</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>optional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='subsysType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pci</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>scsi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='capsType'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='pciBackend'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </hostdev>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <rng supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-non-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>random</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>egd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>builtin</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </rng>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <filesystem supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='driverType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>path</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>handle</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtiofs</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </filesystem>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <tpm supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tpm-tis</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tpm-crb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>emulator</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>external</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendVersion'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>2.0</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </tpm>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <redirdev supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='bus'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </redirdev>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <channel supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pty</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>unix</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </channel>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <crypto supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>qemu</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>builtin</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </crypto>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <interface supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>default</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>passt</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </interface>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <panic supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>isa</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>hyperv</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </panic>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <console supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>null</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pty</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dev</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>file</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pipe</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>stdio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>udp</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tcp</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>unix</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>qemu-vdagent</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dbus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </console>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </devices>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <features>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <gic supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <vmcoreinfo supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <genid supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <backingStoreInput supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <backup supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <async-teardown supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <s390-pv supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <ps2 supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <tdx supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <sev supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <sgx supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <hyperv supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='features'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>relaxed</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vapic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>spinlocks</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vpindex</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>runtime</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>synic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>stimer</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>reset</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vendor_id</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>frequencies</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>reenlightenment</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tlbflush</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ipi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>avic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>emsr_bitmap</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>xmm_input</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <defaults>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <spinlocks>4095</spinlocks>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <stimer_direct>on</stimer_direct>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </defaults>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </hyperv>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <launchSecurity supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </features>
Jan 31 07:07:29 compute-1 nova_compute[221338]: </domainCapabilities>
Jan 31 07:07:29 compute-1 nova_compute[221338]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:29.118 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 07:07:29 compute-1 nova_compute[221338]: <domainCapabilities>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <domain>kvm</domain>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <arch>x86_64</arch>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <vcpu max='240'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <iothreads supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <os supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <enum name='firmware'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <loader supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>rom</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pflash</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='readonly'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>yes</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>no</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='secure'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>no</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </loader>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </os>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <cpu>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>on</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>off</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='maximum' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='maximumMigratable'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>on</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>off</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='host-model' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <vendor>AMD</vendor>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='x2apic'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='stibp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='ssbd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='succor'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='lbrv'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <mode name='custom' supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Broadwell-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='ClearwaterForest'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ddpd-u'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sha512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm3'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ddpd-u'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sha512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm3'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sm4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cooperlake'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cooperlake-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Cooperlake-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Denverton-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Dhyana-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Turin'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibpb-brtype'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbpb'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amd-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='auto-ibrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibpb-brtype'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='no-nested-data-bp'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='null-sel-clr-base'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='perfmon-v2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbpb'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='stibp-always-on'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='EPYC-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-128'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-256'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-128'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-256'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx10-512'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='prefetchiti'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Haswell-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='IvyBridge-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='KnightsMill'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4fmaps'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4vnniw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512er'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512pf'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='KnightsMill-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4fmaps'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-4vnniw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512er'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512pf'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tbm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fma4'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tbm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xop'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='amx-tile'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-bf16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-fp16'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bitalg'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vbmi2'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrc'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fzrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='la57'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='taa-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='tsx-ldtrk'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='SierraForest-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ifma'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-ne-convert'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx-vnni-int8'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bhi-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='bus-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cmpccxadd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fbsdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='fsrs'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ibrs-all'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='intel-psfd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ipred-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='lam'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mcdt-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pbrsb-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='psdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rrsba-ctrl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='serialize'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vaes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='vpclmulqdq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='hle'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='rtm'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512bw'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512cd'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512dq'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512f'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='avx512vl'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='invpcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pcid'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='pku'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='mpx'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v2'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v3'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='core-capability'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='split-lock-detect'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='Snowridge-v4'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='cldemote'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='erms'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='gfni'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdir64b'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='movdiri'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='xsaves'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='athlon'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='athlon-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='core2duo'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='core2duo-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='coreduo'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='coreduo-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='n270'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='n270-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='ss'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='phenom'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <blockers model='phenom-v1'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnow'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <feature name='3dnowext'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </blockers>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </mode>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </cpu>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <memoryBacking supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <enum name='sourceType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>file</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>anonymous</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <value>memfd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </memoryBacking>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <devices>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <disk supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='diskDevice'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>disk</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>cdrom</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>floppy</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>lun</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='bus'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ide</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>fdc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>scsi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>sata</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-non-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </disk>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <graphics supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vnc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>egl-headless</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dbus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </graphics>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <video supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='modelType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vga</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>cirrus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>none</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>bochs</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ramfb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </video>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <hostdev supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='mode'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>subsystem</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='startupPolicy'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>default</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>mandatory</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>requisite</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>optional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='subsysType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pci</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>scsi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='capsType'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='pciBackend'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </hostdev>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <rng supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtio-non-transitional</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>random</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>egd</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>builtin</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </rng>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <filesystem supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='driverType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>path</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>handle</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>virtiofs</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </filesystem>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <tpm supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tpm-tis</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tpm-crb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>emulator</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>external</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendVersion'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>2.0</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </tpm>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <redirdev supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='bus'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>usb</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </redirdev>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <channel supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pty</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>unix</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </channel>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <crypto supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>qemu</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendModel'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>builtin</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </crypto>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <interface supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='backendType'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>default</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>passt</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </interface>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <panic supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='model'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>isa</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>hyperv</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </panic>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <console supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='type'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>null</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vc</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pty</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dev</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>file</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>pipe</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>stdio</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>udp</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tcp</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>unix</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>qemu-vdagent</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>dbus</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </console>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </devices>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <features>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <gic supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <vmcoreinfo supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <genid supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <backingStoreInput supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <backup supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <async-teardown supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <s390-pv supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <ps2 supported='yes'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <tdx supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <sev supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <sgx supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <hyperv supported='yes'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <enum name='features'>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>relaxed</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vapic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>spinlocks</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vpindex</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>runtime</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>synic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>stimer</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>reset</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>vendor_id</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>frequencies</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>reenlightenment</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>tlbflush</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>ipi</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>avic</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>emsr_bitmap</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <value>xmm_input</value>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </enum>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       <defaults>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <spinlocks>4095</spinlocks>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <stimer_direct>on</stimer_direct>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:07:29 compute-1 nova_compute[221338]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:07:29 compute-1 nova_compute[221338]:       </defaults>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     </hyperv>
Jan 31 07:07:29 compute-1 nova_compute[221338]:     <launchSecurity supported='no'/>
Jan 31 07:07:29 compute-1 nova_compute[221338]:   </features>
Jan 31 07:07:29 compute-1 nova_compute[221338]: </domainCapabilities>
Jan 31 07:07:29 compute-1 nova_compute[221338]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
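
The <domainCapabilities> document above is the raw XML nova's _get_domain_capabilities pulls from libvirt. A minimal sketch of fetching the same document directly, assuming libvirt-python is installed and a local qemu:///system socket is reachable (all arguments besides the URI are optional hints):

    import libvirt

    conn = libvirt.open('qemu:///system')
    # Same call family nova uses; returns the <domainCapabilities> XML
    # dumped in the log, for the requested arch/virt type.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm')
    print(caps_xml.splitlines()[0])  # '<domainCapabilities>'
    conn.close()
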
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:29.182 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:29.182 221342 INFO nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Secure Boot support detected
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:29.184 221342 INFO nova.virt.libvirt.driver [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:29.191 221342 DEBUG nova.virt.libvirt.driver [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] cpu compare xml: <cpu match="exact">
Jan 31 07:07:29 compute-1 nova_compute[221338]:   <model>Nehalem</model>
Jan 31 07:07:29 compute-1 nova_compute[221338]: </cpu>
Jan 31 07:07:29 compute-1 nova_compute[221338]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
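
The two-line <cpu match="exact"> document logged above is what _compare_cpu hands to libvirt to decide whether this host can accept a guest requiring the Nehalem model. Nova prefers the newer compareHypervisorCPU API where available; a sketch using the older, simpler compareCPU call (libvirt-python assumed):

    import libvirt

    CPU_XML = '<cpu match="exact"><model>Nehalem</model></cpu>'

    conn = libvirt.open('qemu:///system')
    ret = conn.compareCPU(CPU_XML, 0)
    labels = {
        libvirt.VIR_CPU_COMPARE_INCOMPATIBLE: 'incompatible',
        libvirt.VIR_CPU_COMPARE_IDENTICAL: 'identical',
        libvirt.VIR_CPU_COMPARE_SUPERSET: 'superset',  # host exceeds the model: OK
    }
    print(labels.get(ret, 'error'))
    conn.close()
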
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:29.194 221342 DEBUG nova.virt.libvirt.driver [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 07:07:29 compute-1 nova_compute[221338]: 2026-01-31 07:07:29.464 221342 INFO nova.virt.node [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Determined node identity 6c25628b-2484-4cb3-b051-815f7248948f from /var/lib/nova/compute_id
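
The node identity logged above is just a UUID persisted in a plain file; a minimal sketch of reading it the way nova.virt.node resolves it (path taken from the log line):

    from pathlib import Path

    # /var/lib/nova/compute_id holds the bare node UUID, e.g.
    # 6c25628b-2484-4cb3-b051-815f7248948f on this host.
    print(Path('/var/lib/nova/compute_id').read_text().strip())
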
Jan 31 07:07:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:31.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:31 compute-1 podman[221443]: 2026-01-31 07:07:31.122474496 +0000 UTC m=+0.049201440 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
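
The podman event above records a periodic health check passing for ovn_metadata_agent (health_status=healthy, failing streak 0), running the '/openstack/healthcheck' test mounted into the container. The same probe can be triggered on demand; a sketch via subprocess, where exit code 0 means healthy:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command once and reports the result via its exit status.
    rc = subprocess.call(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')
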
Jan 31 07:07:31 compute-1 nova_compute[221338]: 2026-01-31 07:07:31.769 221342 WARNING nova.compute.manager [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Compute nodes ['6c25628b-2484-4cb3-b051-815f7248948f'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 07:07:31 compute-1 nova_compute[221338]: 2026-01-31 07:07:31.836 221342 INFO nova.compute.manager [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 07:07:31 compute-1 nova_compute[221338]: 2026-01-31 07:07:31.909 221342 WARNING nova.compute.manager [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Jan 31 07:07:31 compute-1 nova_compute[221338]: 2026-01-31 07:07:31.909 221342 DEBUG oslo_concurrency.lockutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:07:31 compute-1 nova_compute[221338]: 2026-01-31 07:07:31.910 221342 DEBUG oslo_concurrency.lockutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:07:31 compute-1 nova_compute[221338]: 2026-01-31 07:07:31.910 221342 DEBUG oslo_concurrency.lockutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:07:31 compute-1 nova_compute[221338]: 2026-01-31 07:07:31.910 221342 DEBUG nova.compute.resource_tracker [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:07:31 compute-1 nova_compute[221338]: 2026-01-31 07:07:31.910 221342 DEBUG oslo_concurrency.processutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:07:32 compute-1 ceph-mon[81728]: pgmap v698: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:32 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:07:32 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4160688059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:07:32 compute-1 nova_compute[221338]: 2026-01-31 07:07:32.343 221342 DEBUG oslo_concurrency.processutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
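
The resource audit shells out to ceph to size the RBD-backed disk pool; the 0.433s round trip above matches the mon_command dispatch visible in the ceph-mon lines. A sketch of the same probe and the cluster totals nova reads from it (command line copied from the log; key names are standard ceph df JSON output):

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    # Cluster-wide totals, in bytes.
    print(stats['total_bytes'], stats['total_avail_bytes'])
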
Jan 31 07:07:32 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 07:07:32 compute-1 systemd[1]: Started libvirt nodedev daemon.
Jan 31 07:07:32 compute-1 nova_compute[221338]: 2026-01-31 07:07:32.581 221342 WARNING nova.virt.libvirt.driver [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:07:32 compute-1 nova_compute[221338]: 2026-01-31 07:07:32.583 221342 DEBUG nova.compute.resource_tracker [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5326MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:07:32 compute-1 nova_compute[221338]: 2026-01-31 07:07:32.583 221342 DEBUG oslo_concurrency.lockutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:07:32 compute-1 nova_compute[221338]: 2026-01-31 07:07:32.584 221342 DEBUG oslo_concurrency.lockutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:07:32 compute-1 nova_compute[221338]: 2026-01-31 07:07:32.610 221342 WARNING nova.compute.resource_tracker [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] No compute node record for compute-1.ctlplane.example.com:6c25628b-2484-4cb3-b051-815f7248948f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 6c25628b-2484-4cb3-b051-815f7248948f could not be found.
Jan 31 07:07:32 compute-1 sudo[221634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llqnrqyglgzdszubdciiyustmkjszqth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843252.5669706-3623-46445864847734/AnsiballZ_podman_container.py'
Jan 31 07:07:32 compute-1 sudo[221634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:32 compute-1 nova_compute[221338]: 2026-01-31 07:07:32.970 221342 INFO nova.compute.resource_tracker [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 6c25628b-2484-4cb3-b051-815f7248948f
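
With no existing record found, the resource tracker creates one under the persisted UUID, and the node should now be visible through the compute API. A hypothetical verification step, assuming openstacksdk and a configured clouds.yaml entry named 'default':

    import openstack

    conn = openstack.connect(cloud='default')
    # Lists hypervisor records; compute-1.ctlplane.example.com should
    # appear once the record above has been created.
    for hv in conn.compute.hypervisors():
        print(hv.name, hv.id)
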
Jan 31 07:07:33 compute-1 python3.9[221636]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 07:07:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:33.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: pgmap v699: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: pgmap v700: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: pgmap v701: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 989 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: pgmap v702: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:33 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/4160688059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:07:33 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/953542065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:07:33 compute-1 systemd[1]: Started libpod-conmon-a3c644e8d19767ebf8de2ab7478ab0cbdf949c047cff8512f994ff77edec337f.scope.
Jan 31 07:07:33 compute-1 systemd[1]: Started libcrun container.
Jan 31 07:07:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc17ffdcab578c559b3eb1b22881ea03185704f5aa6330e7fd3dcda41e47002e/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc17ffdcab578c559b3eb1b22881ea03185704f5aa6330e7fd3dcda41e47002e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc17ffdcab578c559b3eb1b22881ea03185704f5aa6330e7fd3dcda41e47002e/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:33 compute-1 podman[221661]: 2026-01-31 07:07:33.213795875 +0000 UTC m=+0.090206958 container init a3c644e8d19767ebf8de2ab7478ab0cbdf949c047cff8512f994ff77edec337f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 07:07:33 compute-1 podman[221661]: 2026-01-31 07:07:33.220122236 +0000 UTC m=+0.096533309 container start a3c644e8d19767ebf8de2ab7478ab0cbdf949c047cff8512f994ff77edec337f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:07:33 compute-1 python3.9[221636]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 07:07:33 compute-1 nova_compute_init[221694]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 07:07:33 compute-1 systemd[1]: libpod-a3c644e8d19767ebf8de2ab7478ab0cbdf949c047cff8512f994ff77edec337f.scope: Deactivated successfully.
Jan 31 07:07:33 compute-1 podman[221676]: 2026-01-31 07:07:33.286609742 +0000 UTC m=+0.091187155 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:07:33 compute-1 podman[221721]: 2026-01-31 07:07:33.313323544 +0000 UTC m=+0.024988757 container died a3c644e8d19767ebf8de2ab7478ab0cbdf949c047cff8512f994ff77edec337f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm)
Jan 31 07:07:33 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3c644e8d19767ebf8de2ab7478ab0cbdf949c047cff8512f994ff77edec337f-userdata-shm.mount: Deactivated successfully.
Jan 31 07:07:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-dc17ffdcab578c559b3eb1b22881ea03185704f5aa6330e7fd3dcda41e47002e-merged.mount: Deactivated successfully.
Jan 31 07:07:33 compute-1 sudo[221634]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:33 compute-1 podman[221721]: 2026-01-31 07:07:33.3620391 +0000 UTC m=+0.073704313 container cleanup a3c644e8d19767ebf8de2ab7478ab0cbdf949c047cff8512f994ff77edec337f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:07:33 compute-1 systemd[1]: libpod-conmon-a3c644e8d19767ebf8de2ab7478ab0cbdf949c047cff8512f994ff77edec337f.scope: Deactivated successfully.
Jan 31 07:07:34 compute-1 ceph-mon[81728]: pgmap v703: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:34 compute-1 sshd-session[197885]: Connection closed by 192.168.122.30 port 51202
Jan 31 07:07:34 compute-1 sshd-session[197882]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:07:34 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 07:07:34 compute-1 systemd[1]: session-49.scope: Consumed 1min 42.191s CPU time.
Jan 31 07:07:34 compute-1 systemd-logind[788]: Session 49 logged out. Waiting for processes to exit.
Jan 31 07:07:34 compute-1 systemd-logind[788]: Removed session 49.
Jan 31 07:07:35 compute-1 nova_compute[221338]: 2026-01-31 07:07:35.006 221342 DEBUG nova.compute.resource_tracker [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:07:35 compute-1 nova_compute[221338]: 2026-01-31 07:07:35.007 221342 DEBUG nova.compute.resource_tracker [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:07:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:35.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:35.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:35 compute-1 ceph-mon[81728]: pgmap v704: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:35 compute-1 nova_compute[221338]: 2026-01-31 07:07:35.900 221342 INFO nova.scheduler.client.report [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] [req-b3958778-ce06-4449-84cb-6a25c0802710] Created resource provider record via placement API for resource provider with UUID 6c25628b-2484-4cb3-b051-815f7248948f and name compute-1.ctlplane.example.com.
Jan 31 07:07:35 compute-1 nova_compute[221338]: 2026-01-31 07:07:35.926 221342 DEBUG oslo_concurrency.processutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:07:36 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:07:36 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2439373444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.354 221342 DEBUG oslo_concurrency.processutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.360 221342 DEBUG nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 07:07:36 compute-1 nova_compute[221338]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.360 221342 INFO nova.virt.libvirt.host [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] kernel doesn't support AMD SEV
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.361 221342 DEBUG nova.compute.provider_tree [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Updating inventory in ProviderTree for provider 6c25628b-2484-4cb3-b051-815f7248948f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.362 221342 DEBUG nova.virt.libvirt.driver [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.364 221342 DEBUG nova.virt.libvirt.driver [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Libvirt baseline CPU <cpu>
Jan 31 07:07:36 compute-1 nova_compute[221338]:   <arch>x86_64</arch>
Jan 31 07:07:36 compute-1 nova_compute[221338]:   <model>Nehalem</model>
Jan 31 07:07:36 compute-1 nova_compute[221338]:   <vendor>AMD</vendor>
Jan 31 07:07:36 compute-1 nova_compute[221338]:   <topology sockets="8" cores="1" threads="1"/>
Jan 31 07:07:36 compute-1 nova_compute[221338]: </cpu>
Jan 31 07:07:36 compute-1 nova_compute[221338]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.461 221342 DEBUG nova.scheduler.client.report [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Updated inventory for provider 6c25628b-2484-4cb3-b051-815f7248948f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.462 221342 DEBUG nova.compute.provider_tree [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Updating resource provider 6c25628b-2484-4cb3-b051-815f7248948f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.462 221342 DEBUG nova.compute.provider_tree [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Updating inventory in ProviderTree for provider 6c25628b-2484-4cb3-b051-815f7248948f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.551 221342 DEBUG nova.compute.provider_tree [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Updating resource provider 6c25628b-2484-4cb3-b051-815f7248948f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.599 221342 DEBUG nova.compute.resource_tracker [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.599 221342 DEBUG oslo_concurrency.lockutils [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.599 221342 DEBUG nova.service [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.726 221342 DEBUG nova.service [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 31 07:07:36 compute-1 nova_compute[221338]: 2026-01-31 07:07:36.727 221342 DEBUG nova.servicegroup.drivers.db [None req-d91844a5-f436-4d1b-b8a1-723195b1c52a - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 31 07:07:36 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 999 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:36 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1432038719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:07:36 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2439373444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:07:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:37 compute-1 ceph-mon[81728]: pgmap v705: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:37 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3888886798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:07:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:38 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2174364683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:07:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:39.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:39.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:40 compute-1 ceph-mon[81728]: pgmap v706: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:40 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:41.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:42 compute-1 ceph-mon[81728]: pgmap v707: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:42 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1004 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.154928) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262154977, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1541, "num_deletes": 257, "total_data_size": 2863818, "memory_usage": 2900800, "flush_reason": "Manual Compaction"}
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262165719, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1869877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19128, "largest_seqno": 20664, "table_properties": {"data_size": 1863833, "index_size": 3055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15518, "raw_average_key_size": 20, "raw_value_size": 1850333, "raw_average_value_size": 2428, "num_data_blocks": 135, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843149, "oldest_key_time": 1769843149, "file_creation_time": 1769843262, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 10839 microseconds, and 3808 cpu microseconds.
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.165770) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1869877 bytes OK
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.165791) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.173566) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.173601) EVENT_LOG_v1 {"time_micros": 1769843262173593, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.173623) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2856458, prev total WAL file size 2856458, number of live WAL files 2.
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.174619) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323533' seq:72057594037927935, type:22 .. '6C6F676D00353036' seq:0, type:0; will stop at (end)
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1826KB)], [36(6850KB)]
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262174704, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 8884923, "oldest_snapshot_seqno": -1}
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 5450 keys, 8684624 bytes, temperature: kUnknown
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262260843, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 8684624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8649944, "index_size": 19978, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 141232, "raw_average_key_size": 25, "raw_value_size": 8552358, "raw_average_value_size": 1569, "num_data_blocks": 798, "num_entries": 5450, "num_filter_entries": 5450, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843262, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.261098) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 8684624 bytes
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.263125) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.0 rd, 100.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 6.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(9.4) write-amplify(4.6) OK, records in: 5977, records dropped: 527 output_compression: NoCompression
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.263174) EVENT_LOG_v1 {"time_micros": 1769843262263158, "job": 20, "event": "compaction_finished", "compaction_time_micros": 86237, "compaction_time_cpu_micros": 16332, "output_level": 6, "num_output_files": 1, "total_output_size": 8684624, "num_input_records": 5977, "num_output_records": 5450, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262263568, "job": 20, "event": "table_file_deletion", "file_number": 38}
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843262264372, "job": 20, "event": "table_file_deletion", "file_number": 36}
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.174374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.264509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.264515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.264517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.264519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:42 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:42.264521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:44 compute-1 ceph-mon[81728]: pgmap v708: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:45.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:45 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:46 compute-1 ceph-mon[81728]: pgmap v709: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:47.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:47.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:47 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1009 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:48 compute-1 ceph-mon[81728]: pgmap v710: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:49.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:49 compute-1 ceph-mon[81728]: pgmap v711: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:07:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:07:51 compute-1 ceph-mon[81728]: pgmap v712: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:52 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1014 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:53 compute-1 ceph-mon[81728]: pgmap v713: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:07:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:55.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:07:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:56 compute-1 ceph-mon[81728]: pgmap v714: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:57 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1019 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.322925) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277322955, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 446, "num_deletes": 251, "total_data_size": 439816, "memory_usage": 449672, "flush_reason": "Manual Compaction"}
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277327377, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 288495, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20669, "largest_seqno": 21110, "table_properties": {"data_size": 286133, "index_size": 462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6265, "raw_average_key_size": 19, "raw_value_size": 281262, "raw_average_value_size": 854, "num_data_blocks": 21, "num_entries": 329, "num_filter_entries": 329, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843263, "oldest_key_time": 1769843263, "file_creation_time": 1769843277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 4496 microseconds, and 1185 cpu microseconds.
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.327419) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 288495 bytes OK
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.327436) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.330391) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.330450) EVENT_LOG_v1 {"time_micros": 1769843277330436, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.330479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 437019, prev total WAL file size 437019, number of live WAL files 2.
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.331966) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(281KB)], [39(8481KB)]
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277332004, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 8973119, "oldest_snapshot_seqno": -1}
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5269 keys, 7229189 bytes, temperature: kUnknown
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277389989, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 7229189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7196880, "index_size": 18055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 138243, "raw_average_key_size": 26, "raw_value_size": 7103476, "raw_average_value_size": 1348, "num_data_blocks": 712, "num_entries": 5269, "num_filter_entries": 5269, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.390203) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7229189 bytes
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.394475) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.6 rd, 124.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.3 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(56.2) write-amplify(25.1) OK, records in: 5779, records dropped: 510 output_compression: NoCompression
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.394502) EVENT_LOG_v1 {"time_micros": 1769843277394492, "job": 22, "event": "compaction_finished", "compaction_time_micros": 58052, "compaction_time_cpu_micros": 13256, "output_level": 6, "num_output_files": 1, "total_output_size": 7229189, "num_input_records": 5779, "num_output_records": 5269, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277394668, "job": 22, "event": "table_file_deletion", "file_number": 41}
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843277395390, "job": 22, "event": "table_file_deletion", "file_number": 39}
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.330931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.395487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.395494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.395498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.395501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:57 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:07:57.395503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:07:58 compute-1 ceph-mon[81728]: pgmap v715: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:07:59.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:07:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:07:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:07:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:07:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:07:59 compute-1 ceph-mon[81728]: pgmap v716: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:01.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:01.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:01 compute-1 ceph-mon[81728]: pgmap v717: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:02 compute-1 podman[221795]: 2026-01-31 07:08:02.121182495 +0000 UTC m=+0.047279799 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:02 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1024 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:03.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:03.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:03 compute-1 ceph-mon[81728]: pgmap v718: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:04 compute-1 podman[221815]: 2026-01-31 07:08:04.161939089 +0000 UTC m=+0.086090857 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 07:08:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:05.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:05.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:05 compute-1 ceph-mon[81728]: pgmap v719: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:07.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:07.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:07 compute-1 ceph-mon[81728]: pgmap v720: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:07 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1029 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:07 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/4169574238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:08:08 compute-1 sudo[221841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:08 compute-1 sudo[221841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:08 compute-1 sudo[221841]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:08 compute-1 sudo[221866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:08 compute-1 sudo[221866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:08 compute-1 sudo[221866]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:08 compute-1 sudo[221891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:08 compute-1 sudo[221891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:08 compute-1 sudo[221891]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:08 compute-1 sudo[221916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:08:08 compute-1 sudo[221916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:08 compute-1 sudo[221916]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:08 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/4169574238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:08:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:09.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:09.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:09 compute-1 ceph-mon[81728]: pgmap v721: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:09 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/986348227' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:08:09 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/986348227' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:08:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:08:09 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:08:10 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:10 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:10 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:10 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:08:10 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:10 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:10 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:10 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/3490018636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:08:10 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/3490018636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:08:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:11.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:11.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:12 compute-1 ceph-mon[81728]: pgmap v722: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:13.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:13 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1034 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:13.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:14 compute-1 ceph-mon[81728]: pgmap v723: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:08:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:15.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:08:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:15.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:16 compute-1 ceph-mon[81728]: pgmap v724: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:16 compute-1 sudo[221972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:16 compute-1 sudo[221972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:16 compute-1 sudo[221972]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:16 compute-1 sudo[221997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:08:16 compute-1 sudo[221997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:16 compute-1 sudo[221997]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:17.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:17.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:17 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:08:17 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:08:17 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1039 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:18 compute-1 ceph-mon[81728]: pgmap v725: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:19.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:19.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:08:19.883 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:08:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:08:19.883 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:08:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:08:19.883 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:08:20 compute-1 ceph-mon[81728]: pgmap v726: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:20 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:21.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:21.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:22 compute-1 ceph-mon[81728]: pgmap v727: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:22 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1044 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:23.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:23.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:24 compute-1 ceph-mon[81728]: pgmap v728: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:25.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:25.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:25 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:26 compute-1 ceph-mon[81728]: pgmap v729: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:27.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:27.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:27 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1238846270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:08:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:27 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1049 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:28 compute-1 ceph-mon[81728]: pgmap v730: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:28 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/394180711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:08:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:29.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:29.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:30 compute-1 ceph-mon[81728]: pgmap v731: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:31.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:31.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:32 compute-1 ceph-mon[81728]: pgmap v732: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:32 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1054 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:33 compute-1 podman[222022]: 2026-01-31 07:08:33.119835087 +0000 UTC m=+0.043889107 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 07:08:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:33.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:33.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:34 compute-1 ceph-mon[81728]: pgmap v733: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:35.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:35 compute-1 podman[222041]: 2026-01-31 07:08:35.160653402 +0000 UTC m=+0.088625595 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:08:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:35.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:35 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1747668994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:08:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.729 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.730 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.730 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.730 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.746 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.747 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.748 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.748 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.748 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.748 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.748 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.767 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.767 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.768 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.790 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.791 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.791 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.791 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:08:35 compute-1 nova_compute[221338]: 2026-01-31 07:08:35.792 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:08:36 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:08:36 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2598506739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.218 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.362 221342 WARNING nova.virt.libvirt.driver [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.363 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5353MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.364 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.364 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.444 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.444 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.473 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:08:36 compute-1 ceph-mon[81728]: pgmap v734: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:36 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/323485511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:08:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:36 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2598506739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:08:36 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:08:36 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/15126995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.895 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.900 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed in ProviderTree for provider: 6c25628b-2484-4cb3-b051-815f7248948f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.917 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed for provider 6c25628b-2484-4cb3-b051-815f7248948f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.918 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.919 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:08:36 compute-1 nova_compute[221338]: 2026-01-31 07:08:36.919 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:08:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:37.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:37.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:37 compute-1 ceph-mon[81728]: pgmap v735: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:37 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/15126995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:08:37 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1059 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:39.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:39.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:39 compute-1 ceph-mon[81728]: pgmap v736: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:40 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:41.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:41.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:41 compute-1 ceph-mon[81728]: pgmap v737: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:42 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1064 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:43.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:43 compute-1 ceph-mon[81728]: pgmap v738: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:45.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:45 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:45 compute-1 ceph-mon[81728]: pgmap v739: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:47.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:47 compute-1 ceph-mon[81728]: pgmap v740: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:47 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1069 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:49.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:49 compute-1 sshd-session[222111]: Invalid user validator from 2.57.122.238 port 42220
Jan 31 07:08:50 compute-1 sshd-session[222111]: Connection closed by invalid user validator 2.57.122.238 port 42220 [preauth]
Jan 31 07:08:50 compute-1 ceph-mon[81728]: pgmap v741: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:51.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:52 compute-1 ceph-mon[81728]: pgmap v742: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:52 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1074 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:53.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:54 compute-1 ceph-mon[81728]: pgmap v743: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:55.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:56 compute-1 ceph-mon[81728]: pgmap v744: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:57.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:57 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1079 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:08:58 compute-1 ceph-mon[81728]: pgmap v745: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:08:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:59.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:08:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:08:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:59.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:08:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:00 compute-1 ceph-mon[81728]: pgmap v746: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:01.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:01.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:01 compute-1 ceph-mon[81728]: pgmap v747: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:02 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1084 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:03.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:03.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:03 compute-1 ceph-mon[81728]: pgmap v748: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:04 compute-1 podman[222113]: 2026-01-31 07:09:04.115631487 +0000 UTC m=+0.042146620 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 07:09:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:05.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:05.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:05 compute-1 ceph-mon[81728]: pgmap v749: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:06 compute-1 podman[222132]: 2026-01-31 07:09:06.125463024 +0000 UTC m=+0.052731885 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 31 07:09:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:07.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:07.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:07 compute-1 ceph-mon[81728]: pgmap v750: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:07 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1089 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:09.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:09.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:09:09 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6383 writes, 26K keys, 6383 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6383 writes, 1107 syncs, 5.77 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 495 writes, 762 keys, 495 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 495 writes, 240 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:09:10 compute-1 ceph-mon[81728]: pgmap v751: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:10 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:11.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:12 compute-1 ceph-mon[81728]: pgmap v752: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:13 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1094 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:13.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:13.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:14 compute-1 ceph-mon[81728]: pgmap v753: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:15.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:16 compute-1 ceph-mon[81728]: pgmap v754: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:17 compute-1 sudo[222160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:17 compute-1 sudo[222160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-1 sudo[222160]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:17.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:17 compute-1 sudo[222185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:09:17 compute-1 sudo[222185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-1 sudo[222185]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:17.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:17 compute-1 sudo[222210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:17 compute-1 sudo[222210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-1 sudo[222210]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-1 sudo[222235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:09:17 compute-1 sudo[222235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-1 sudo[222235]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-1 sudo[222280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:17 compute-1 sudo[222280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-1 sudo[222280]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-1 sudo[222305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:09:17 compute-1 sudo[222305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-1 sudo[222305]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-1 sudo[222330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:17 compute-1 sudo[222330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-1 sudo[222330]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-1 sudo[222355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:09:17 compute-1 sudo[222355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:18 compute-1 sudo[222355]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:18 compute-1 ceph-mon[81728]: pgmap v755: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:18 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1099 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:18 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:09:18 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:09:18 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:09:18 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:09:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:19 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:09:19 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:09:19 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:09:19 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:09:19 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:09:19 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:09:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:19.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:19.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:09:19.884 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:09:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:09:19.884 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:09:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:09:19.884 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:09:20 compute-1 ceph-mon[81728]: pgmap v756: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:20 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:21.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:22 compute-1 ceph-mon[81728]: pgmap v757: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:22 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1104 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:23.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:23 compute-1 sudo[222410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:23 compute-1 sudo[222410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:23 compute-1 sudo[222410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:23 compute-1 sudo[222435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:09:23 compute-1 sudo[222435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:23 compute-1 sudo[222435]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:24 compute-1 ceph-mon[81728]: pgmap v758: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:24 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:09:24 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:09:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:25.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:25 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:26 compute-1 ceph-mon[81728]: pgmap v759: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.159 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.160 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:27.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.197 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.197 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.197 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.198 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.198 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.198 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.198 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.198 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.229 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.230 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.230 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.230 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.230 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:09:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:27 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:09:27 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/974135017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:09:27 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1109 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.642 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.774 221342 WARNING nova.virt.libvirt.driver [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.776 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5317MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.776 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.776 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.848 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.849 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:09:27 compute-1 nova_compute[221338]: 2026-01-31 07:09:27.864 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:09:28 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:09:28 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4046323715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:09:28 compute-1 nova_compute[221338]: 2026-01-31 07:09:28.272 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:09:28 compute-1 nova_compute[221338]: 2026-01-31 07:09:28.276 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed in ProviderTree for provider: 6c25628b-2484-4cb3-b051-815f7248948f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:09:28 compute-1 nova_compute[221338]: 2026-01-31 07:09:28.294 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed for provider 6c25628b-2484-4cb3-b051-815f7248948f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:09:28 compute-1 nova_compute[221338]: 2026-01-31 07:09:28.295 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:09:28 compute-1 nova_compute[221338]: 2026-01-31 07:09:28.296 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:09:28 compute-1 ceph-mon[81728]: pgmap v760: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:28 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/974135017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:09:28 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/949506148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:09:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:28 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/4046323715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:09:29 compute-1 nova_compute[221338]: 2026-01-31 07:09:29.072 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:09:29 compute-1 nova_compute[221338]: 2026-01-31 07:09:29.073 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:09:29 compute-1 nova_compute[221338]: 2026-01-31 07:09:29.073 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:09:29 compute-1 nova_compute[221338]: 2026-01-31 07:09:29.092 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:09:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:29.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:29 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2886384482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:09:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:30 compute-1 ceph-mon[81728]: pgmap v761: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:09:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:31.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:09:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:31.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:31 compute-1 ceph-mon[81728]: pgmap v762: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:32 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:33.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:33.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:33 compute-1 ceph-mon[81728]: pgmap v763: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:34 compute-1 ceph-mon[81728]: pgmap v764: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:35 compute-1 podman[222504]: 2026-01-31 07:09:35.129621271 +0000 UTC m=+0.044068142 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:09:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:35.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:36 compute-1 ceph-mon[81728]: pgmap v765: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:37 compute-1 podman[222522]: 2026-01-31 07:09:37.150037275 +0000 UTC m=+0.074967776 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:09:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:37.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:37.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:37 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1918497747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:09:37 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:37 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/794741233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:09:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:38 compute-1 ceph-mon[81728]: pgmap v766: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:39.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:39.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:40 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:40 compute-1 ceph-mon[81728]: pgmap v767: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:41.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:41.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:42 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1124 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:42 compute-1 ceph-mon[81728]: pgmap v768: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:43.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:43.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:45 compute-1 ceph-mon[81728]: pgmap v769: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:45.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:45 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:47 compute-1 ceph-mon[81728]: pgmap v770: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:47.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:47.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:48 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1129 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:49 compute-1 ceph-mon[81728]: pgmap v771: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:49.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:51.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:51 compute-1 ceph-mon[81728]: pgmap v772: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:51.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:52 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1134 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:53.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:53 compute-1 ceph-mon[81728]: pgmap v773: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:53.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:53 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:09:53 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4033 writes, 22K keys, 4033 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 4033 writes, 4033 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1788 writes, 9211 keys, 1788 commit groups, 1.0 writes per commit group, ingest: 15.66 MB, 0.03 MB/s
                                           Interval WAL: 1788 writes, 1788 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     58.5      0.38              0.05        11    0.035       0      0       0.0       0.0
                                             L6      1/0    6.89 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.7     91.4     77.0      1.08              0.16        10    0.108     53K   5382       0.0       0.0
                                            Sum      1/0    6.89 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.7     67.5     72.2      1.46              0.21        21    0.070     53K   5382       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     83.8     83.3      0.72              0.12        12    0.060     35K   3592       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     91.4     77.0      1.08              0.16        10    0.108     53K   5382       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.4      0.37              0.05        10    0.037       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.022, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.09 MB/s write, 0.10 GB read, 0.08 MB/s read, 1.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c9bf1831f0#2 capacity: 308.00 MB usage: 6.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(357,6.47 MB,2.10188%) FilterBlock(21,158.36 KB,0.0502103%) IndexBlock(21,265.39 KB,0.0841463%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:09:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:55.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:55 compute-1 ceph-mon[81728]: pgmap v774: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:55.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:55 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:09:55.453 140083 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'c2:21:78', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '1a:5e:fd:5b:c6:c6'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:09:55 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:09:55.454 140083 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:09:55 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:09:55.455 140083 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3f1b6d5d-330e-4693-ab86-ea25a99a46d7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:09:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:57.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:09:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:57.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:09:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:57 compute-1 ceph-mon[81728]: pgmap v775: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:57 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:09:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:59.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:09:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:59.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:09:59 compute-1 ceph-mon[81728]: pgmap v776: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:00 compute-1 ceph-mon[81728]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops
Jan 31 07:10:00 compute-1 ceph-mon[81728]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 1139 sec, osd.2 has slow ops
Jan 31 07:10:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:01 compute-1 anacron[7266]: Job `cron.monthly' started
Jan 31 07:10:01 compute-1 anacron[7266]: Job `cron.monthly' terminated
Jan 31 07:10:01 compute-1 anacron[7266]: Normal exit (3 jobs run)
Jan 31 07:10:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:01.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:01.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:01 compute-1 ceph-mon[81728]: pgmap v777: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/1626326427' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:10:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/1626326427' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:10:02 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:03.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:03.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:03 compute-1 ceph-mon[81728]: pgmap v778: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:05.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:05.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:06 compute-1 podman[222550]: 2026-01-31 07:10:06.120569469 +0000 UTC m=+0.042520289 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 07:10:06 compute-1 ceph-mon[81728]: pgmap v779: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:07.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:07.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:07 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1149 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:08 compute-1 podman[222567]: 2026-01-31 07:10:08.137983719 +0000 UTC m=+0.060528795 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:10:08 compute-1 ceph-mon[81728]: pgmap v780: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:09.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:09.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:10 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:10 compute-1 ceph-mon[81728]: pgmap v781: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:11.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:11.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:11 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:12 compute-1 ceph-mon[81728]: pgmap v782: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:12 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:13.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:13.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:14 compute-1 ceph-mon[81728]: pgmap v783: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:15.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:15.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:16 compute-1 ceph-mon[81728]: pgmap v784: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:17.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:17.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:17 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1159 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:18 compute-1 ceph-mon[81728]: pgmap v785: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:19.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:19.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:19 compute-1 ceph-mon[81728]: pgmap v786: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:10:19.885 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:10:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:10:19.885 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:10:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:10:19.886 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:10:20 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:21.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:21.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:21 compute-1 ceph-mon[81728]: pgmap v787: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:22 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:23.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:23.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:23 compute-1 ceph-mon[81728]: pgmap v788: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:24 compute-1 sudo[222595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:24 compute-1 sudo[222595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:24 compute-1 sudo[222595]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:24 compute-1 sudo[222620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:10:24 compute-1 sudo[222620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:24 compute-1 sudo[222620]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:24 compute-1 sudo[222645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:24 compute-1 sudo[222645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:24 compute-1 sudo[222645]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:24 compute-1 sudo[222670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:10:24 compute-1 sudo[222670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:24 compute-1 sshd-session[222593]: Received disconnect from 45.148.10.147 port 27194:11:  [preauth]
Jan 31 07:10:24 compute-1 sshd-session[222593]: Disconnected from authenticating user root 45.148.10.147 port 27194 [preauth]
Jan 31 07:10:24 compute-1 sudo[222670]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:25.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:25.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:10:25 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:10:25 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:25 compute-1 nova_compute[221338]: 2026-01-31 07:10:25.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:10:25 compute-1 nova_compute[221338]: 2026-01-31 07:10:25.975 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:10:25 compute-1 nova_compute[221338]: 2026-01-31 07:10:25.975 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:10:25 compute-1 nova_compute[221338]: 2026-01-31 07:10:25.975 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:10:26 compute-1 ceph-mon[81728]: pgmap v789: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:26 compute-1 nova_compute[221338]: 2026-01-31 07:10:26.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:10:26 compute-1 nova_compute[221338]: 2026-01-31 07:10:26.975 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:10:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:27.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:27.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:27 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1169 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:27 compute-1 nova_compute[221338]: 2026-01-31 07:10:27.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:10:27 compute-1 nova_compute[221338]: 2026-01-31 07:10:27.974 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:10:27 compute-1 nova_compute[221338]: 2026-01-31 07:10:27.974 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:10:27 compute-1 nova_compute[221338]: 2026-01-31 07:10:27.990 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:10:27 compute-1 nova_compute[221338]: 2026-01-31 07:10:27.990 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:10:28 compute-1 ceph-mon[81728]: pgmap v790: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:28 compute-1 nova_compute[221338]: 2026-01-31 07:10:28.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:10:28 compute-1 nova_compute[221338]: 2026-01-31 07:10:28.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.020 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.020 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.020 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.021 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.021 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:10:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:29.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:29.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:29 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:10:29 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/964035259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.443 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.607 221342 WARNING nova.virt.libvirt.driver [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.609 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5374MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.609 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.610 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:10:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:29 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/964035259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.691 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.692 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:10:29 compute-1 nova_compute[221338]: 2026-01-31 07:10:29.711 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:10:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:10:30 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1706991789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:10:30 compute-1 nova_compute[221338]: 2026-01-31 07:10:30.119 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:10:30 compute-1 nova_compute[221338]: 2026-01-31 07:10:30.124 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed in ProviderTree for provider: 6c25628b-2484-4cb3-b051-815f7248948f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:10:30 compute-1 nova_compute[221338]: 2026-01-31 07:10:30.149 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed for provider 6c25628b-2484-4cb3-b051-815f7248948f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:10:30 compute-1 nova_compute[221338]: 2026-01-31 07:10:30.151 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:10:30 compute-1 nova_compute[221338]: 2026-01-31 07:10:30.151 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:10:30 compute-1 sudo[222769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:30 compute-1 sudo[222769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:30 compute-1 sudo[222769]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:30 compute-1 sudo[222794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:10:30 compute-1 sudo[222794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:30 compute-1 sudo[222794]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:30 compute-1 ceph-mon[81728]: pgmap v791: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:30 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3873686118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:10:30 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1706991789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:10:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:30 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2003422515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:10:30 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:10:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:31.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:31.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:32 compute-1 ceph-mon[81728]: pgmap v792: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:32 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:32 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1174 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:33.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:33.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:34 compute-1 ceph-mon[81728]: pgmap v793: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:35.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:35.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:35 compute-1 ceph-mon[81728]: pgmap v794: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.890732) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436890802, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2469, "num_deletes": 251, "total_data_size": 4808011, "memory_usage": 4895760, "flush_reason": "Manual Compaction"}
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436904487, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 3114768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21115, "largest_seqno": 23579, "table_properties": {"data_size": 3105633, "index_size": 5245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23625, "raw_average_key_size": 21, "raw_value_size": 3085281, "raw_average_value_size": 2792, "num_data_blocks": 228, "num_entries": 1105, "num_filter_entries": 1105, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843278, "oldest_key_time": 1769843278, "file_creation_time": 1769843436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 13787 microseconds, and 4716 cpu microseconds.
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.904527) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 3114768 bytes OK
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.904543) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.910923) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.910974) EVENT_LOG_v1 {"time_micros": 1769843436910963, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.910998) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 4796789, prev total WAL file size 4796789, number of live WAL files 2.
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.911955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(3041KB)], [42(7059KB)]
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436912035, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 10343957, "oldest_snapshot_seqno": -1}
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5855 keys, 8665032 bytes, temperature: kUnknown
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436949465, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 8665032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8628287, "index_size": 21019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 152190, "raw_average_key_size": 25, "raw_value_size": 8523817, "raw_average_value_size": 1455, "num_data_blocks": 838, "num_entries": 5855, "num_filter_entries": 5855, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.949753) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8665032 bytes
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.956604) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 275.4 rd, 230.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 6.9 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6374, records dropped: 519 output_compression: NoCompression
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.956636) EVENT_LOG_v1 {"time_micros": 1769843436956623, "job": 24, "event": "compaction_finished", "compaction_time_micros": 37558, "compaction_time_cpu_micros": 14116, "output_level": 6, "num_output_files": 1, "total_output_size": 8665032, "num_input_records": 6374, "num_output_records": 5855, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436957047, "job": 24, "event": "table_file_deletion", "file_number": 44}
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843436957726, "job": 24, "event": "table_file_deletion", "file_number": 42}
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.911805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.957882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.957894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.957896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.957898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:10:36 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:10:36.957900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:10:37 compute-1 podman[222819]: 2026-01-31 07:10:37.120723177 +0000 UTC m=+0.047190855 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:10:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:37.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:37.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:37 compute-1 ceph-mon[81728]: pgmap v795: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:37 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1179 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:37 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2623330012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:10:38 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/713155213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:10:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:39 compute-1 podman[222839]: 2026-01-31 07:10:39.177608933 +0000 UTC m=+0.106485357 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 07:10:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:39.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:39.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:40 compute-1 ceph-mon[81728]: pgmap v796: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:40 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:41.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:41.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:42 compute-1 ceph-mon[81728]: pgmap v797: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:43 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1184 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:43.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:43.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:44 compute-1 ceph-mon[81728]: pgmap v798: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:45.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:45.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:45 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:46 compute-1 ceph-mon[81728]: pgmap v799: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:47.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:47 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1189 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:47.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:48 compute-1 ceph-mon[81728]: pgmap v800: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:49.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:49.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:49 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:50 compute-1 ceph-mon[81728]: pgmap v801: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:51.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:51.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:52 compute-1 ceph-mon[81728]: pgmap v802: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:52 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1194 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:53.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:53.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:54 compute-1 ceph-mon[81728]: pgmap v803: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:55.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:55.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:56 compute-1 ceph-mon[81728]: pgmap v804: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:57.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:58 compute-1 ceph-mon[81728]: pgmap v805: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:10:58 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1199 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:10:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:59.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:10:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:00 compute-1 ceph-mon[81728]: pgmap v806: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:00 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:01.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:01.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:11:01 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3499932331' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:11:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:11:01 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3499932331' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:11:01 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/3499932331' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:11:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/3499932331' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:11:02 compute-1 ceph-mon[81728]: pgmap v807: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:02 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:02 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1204 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:03.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:03.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:03 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:04 compute-1 ceph-mon[81728]: pgmap v808: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:04 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:05.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:05.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:05 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:06 compute-1 ceph-mon[81728]: pgmap v809: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:06 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:07.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:07.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:07 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:07 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1209 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:08 compute-1 podman[222866]: 2026-01-31 07:11:08.125692956 +0000 UTC m=+0.048905268 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:11:08 compute-1 ceph-mon[81728]: pgmap v810: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:08 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:09 compute-1 radosgw[83730]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 07:11:09 compute-1 radosgw[83730]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 07:11:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:09.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:09.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:09 compute-1 radosgw[83730]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 07:11:09 compute-1 radosgw[83730]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 31 07:11:09 compute-1 radosgw[83730]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 07:11:09 compute-1 radosgw[83730]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 31 07:11:09 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:09 compute-1 radosgw[83730]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 07:11:10 compute-1 podman[222886]: 2026-01-31 07:11:10.141548647 +0000 UTC m=+0.063228898 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:11:10 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:10 compute-1 ceph-mon[81728]: pgmap v811: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:10 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:11.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:11.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:12 compute-1 ceph-mon[81728]: pgmap v812: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 31 07:11:12 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:12 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1214 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:13.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:13 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:13.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:14 compute-1 ceph-mon[81728]: pgmap v813: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 31 07:11:14 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:15.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:15 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:15.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:16 compute-1 ceph-mon[81728]: pgmap v814: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Jan 31 07:11:16 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:17.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:17.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:17 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:17 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1219 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:18 compute-1 ceph-mon[81728]: pgmap v815: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Jan 31 07:11:18 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:11:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:19.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:11:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:19.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:19 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:11:19.885 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:11:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:11:19.886 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:11:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:11:19.886 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:11:20 compute-1 ceph-mon[81728]: pgmap v816: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 133 op/s
Jan 31 07:11:20 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:20 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:21.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:21.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:11:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - - [31/Jan/2026:07:11:21.505 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.001000026s
Jan 31 07:11:21 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:22 compute-1 ceph-mon[81728]: pgmap v817: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 133 op/s
Jan 31 07:11:22 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:22 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1224 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:23.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:23.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:23 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:24 compute-1 ceph-mon[81728]: pgmap v818: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 132 op/s
Jan 31 07:11:24 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:25.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:11:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:25.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:11:25 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:25 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:26 compute-1 ceph-mon[81728]: pgmap v819: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 132 op/s
Jan 31 07:11:26 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:26 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.145 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.164 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.164 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.164 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.165 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:11:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:27.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 31 07:11:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:27.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 31 07:11:27 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 07:11:27 compute-1 ceph-mon[81728]: osdmap e119: 3 total, 3 up, 3 in
Jan 31 07:11:27 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:27 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1229 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.975 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.975 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.992 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.992 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.993 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:27 compute-1 nova_compute[221338]: 2026-01-31 07:11:27.993 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:28 compute-1 ceph-mon[81728]: pgmap v821: 321 pgs: 1 active+clean+laggy, 320 active+clean; 457 KiB data, 152 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 07:11:28 compute-1 ceph-mon[81728]: osdmap e120: 3 total, 3 up, 3 in
Jan 31 07:11:28 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:29.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:11:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:29.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:11:29 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:29 compute-1 nova_compute[221338]: 2026-01-31 07:11:29.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.007 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.007 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.007 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.008 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.008 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:11:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:11:30 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3664829864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.442 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.584 221342 WARNING nova.virt.libvirt.driver [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.585 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5359MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.585 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.585 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.656 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.657 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:11:30 compute-1 nova_compute[221338]: 2026-01-31 07:11:30.678 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:11:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:30 compute-1 ceph-mon[81728]: pgmap v823: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 152 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 1.0 MiB/s wr, 9 op/s
Jan 31 07:11:30 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:30 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3664829864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:11:30 compute-1 sudo[222956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:30 compute-1 sudo[222956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:30 compute-1 sudo[222956]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:30 compute-1 sudo[222981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:11:30 compute-1 sudo[222981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:30 compute-1 sudo[222981]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:31 compute-1 sudo[223006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:31 compute-1 sudo[223006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:31 compute-1 sudo[223006]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:31 compute-1 sudo[223031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:11:31 compute-1 sudo[223031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:31 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:11:31 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/543715255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:11:31 compute-1 nova_compute[221338]: 2026-01-31 07:11:31.113 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:11:31 compute-1 nova_compute[221338]: 2026-01-31 07:11:31.118 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed in ProviderTree for provider: 6c25628b-2484-4cb3-b051-815f7248948f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:11:31 compute-1 nova_compute[221338]: 2026-01-31 07:11:31.141 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed for provider 6c25628b-2484-4cb3-b051-815f7248948f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:11:31 compute-1 nova_compute[221338]: 2026-01-31 07:11:31.143 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:11:31 compute-1 nova_compute[221338]: 2026-01-31 07:11:31.144 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:11:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:31.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:31 compute-1 sudo[223031]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:31.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:31 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:31 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/543715255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:11:31 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:11:31 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:11:31 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:11:31 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:11:31 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:11:31 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:11:31 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:11:31 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/754813705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:11:32 compute-1 nova_compute[221338]: 2026-01-31 07:11:32.138 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:11:32 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 07:11:33 compute-1 ceph-mon[81728]: pgmap v824: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 152 MiB used, 21 GiB / 21 GiB avail; 6.4 KiB/s rd, 1.0 MiB/s wr, 9 op/s
Jan 31 07:11:33 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:33 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1234 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:33 compute-1 ceph-mon[81728]: osdmap e121: 3 total, 3 up, 3 in
Jan 31 07:11:33 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/4141387167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:11:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:11:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:33.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:11:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:11:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:33.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:11:34 compute-1 ceph-mon[81728]: pgmap v826: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 152 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Jan 31 07:11:34 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:35.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:35.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:35 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:36 compute-1 ceph-mon[81728]: pgmap v827: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 9.1 KiB/s rd, 1.0 MiB/s wr, 13 op/s
Jan 31 07:11:36 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:37 compute-1 sudo[223089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:37 compute-1 sudo[223089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:37 compute-1 sudo[223089]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:37 compute-1 sudo[223114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:11:37 compute-1 sudo[223114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:37 compute-1 sudo[223114]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:37.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:37.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:37 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:37 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:11:37 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:11:37 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1046354474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:11:37 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1239 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:38 compute-1 ceph-mon[81728]: pgmap v828: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 895 KiB/s wr, 12 op/s
Jan 31 07:11:38 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:38 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3663682457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:11:39 compute-1 podman[223139]: 2026-01-31 07:11:39.113017354 +0000 UTC m=+0.039809546 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:11:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:39.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:39.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:39 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:40 compute-1 ceph-mon[81728]: pgmap v829: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 614 B/s wr, 3 op/s
Jan 31 07:11:40 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:40 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:41 compute-1 podman[223158]: 2026-01-31 07:11:41.145815026 +0000 UTC m=+0.073632794 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:11:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:41.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:41.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:41 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:42 compute-1 ceph-mon[81728]: pgmap v830: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 409 B/s wr, 2 op/s
Jan 31 07:11:42 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:42 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1244 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:43.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:11:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:43.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:11:43 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:44 compute-1 ceph-mon[81728]: pgmap v831: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 392 B/s wr, 2 op/s
Jan 31 07:11:44 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:45.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:11:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:45.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:11:45 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:45 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:46 compute-1 ceph-mon[81728]: pgmap v832: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 341 B/s wr, 2 op/s
Jan 31 07:11:46 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:47.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:47.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:47 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:47 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1249 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:48 compute-1 ceph-mon[81728]: pgmap v833: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:48 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:49.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:11:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:49.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:11:50 compute-1 ceph-mon[81728]: pgmap v834: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:50 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:51 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:51.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:51.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:52 compute-1 ceph-mon[81728]: pgmap v835: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:52 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:53 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:53 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1254 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:53.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:53.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:54 compute-1 ceph-mon[81728]: pgmap v836: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:54 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:55 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:55.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:55.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:56 compute-1 ceph-mon[81728]: pgmap v837: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:56 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:57 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:57.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:57.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:58 compute-1 ceph-mon[81728]: pgmap v838: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:58 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:58 compute-1 ceph-mon[81728]: Health check update: 1 slow ops, oldest one blocked for 1259 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:11:58 compute-1 sshd-session[223185]: Invalid user sol from 2.57.122.238 port 52178
Jan 31 07:11:59 compute-1 sshd-session[223185]: Connection closed by invalid user sol 2.57.122.238 port 52178 [preauth]
Jan 31 07:11:59 compute-1 ceph-mon[81728]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'images' : 1 ])
Jan 31 07:11:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:59.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:11:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:59.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:00 compute-1 ceph-mon[81728]: pgmap v839: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:00 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:01.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:01 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:12:01 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3392015497' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:12:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:12:01 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3392015497' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
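The two audited mon_commands above are the periodic capacity poll from the OpenStack side (entity client.openstack). For reference, a sketch issuing the same JSON payloads through the python-rados binding; the conffile path and the availability of a client.openstack keyring on this host are assumptions:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, out[:80])
    finally:
        cluster.shutdown()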
Jan 31 07:12:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:01.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:02 compute-1 ceph-mon[81728]: pgmap v840: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:02 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/3392015497' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:12:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/3392015497' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:12:02 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1264 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:03.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:03.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:03 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:03 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:04 compute-1 ceph-mon[81728]: pgmap v841: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:04 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:05.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:05.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:06 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:07 compute-1 ceph-mon[81728]: pgmap v842: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:07 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.239191) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527239223, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1466, "num_deletes": 258, "total_data_size": 2580136, "memory_usage": 2622128, "flush_reason": "Manual Compaction"}
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527250998, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1694251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23584, "largest_seqno": 25045, "table_properties": {"data_size": 1688438, "index_size": 2889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14797, "raw_average_key_size": 20, "raw_value_size": 1675432, "raw_average_value_size": 2295, "num_data_blocks": 127, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843437, "oldest_key_time": 1769843437, "file_creation_time": 1769843527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 11855 microseconds, and 2961 cpu microseconds.
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.251047) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1694251 bytes OK
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.251063) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.256318) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.256373) EVENT_LOG_v1 {"time_micros": 1769843527256362, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.256401) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2573072, prev total WAL file size 2573072, number of live WAL files 2.
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.257071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353035' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1654KB)], [45(8461KB)]
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527257132, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 10359283, "oldest_snapshot_seqno": -1}
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 6052 keys, 10199779 bytes, temperature: kUnknown
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527347693, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 10199779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10160266, "index_size": 23275, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 158148, "raw_average_key_size": 26, "raw_value_size": 10050825, "raw_average_value_size": 1660, "num_data_blocks": 929, "num_entries": 6052, "num_filter_entries": 6052, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.347978) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 10199779 bytes
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.349310) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.2 rd, 112.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 6585, records dropped: 533 output_compression: NoCompression
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.349327) EVENT_LOG_v1 {"time_micros": 1769843527349318, "job": 26, "event": "compaction_finished", "compaction_time_micros": 90674, "compaction_time_cpu_micros": 16345, "output_level": 6, "num_output_files": 1, "total_output_size": 10199779, "num_input_records": 6585, "num_output_records": 6052, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527349690, "job": 26, "event": "table_file_deletion", "file_number": 47}
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843527350495, "job": 26, "event": "table_file_deletion", "file_number": 45}
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.256965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.350529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.350532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.350533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.350535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:12:07 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:12:07.350536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
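The JOB 26 compaction summary above gives enough to recompute its own amplification figures; the small differences against the logged 6.0 and 12.1 come from using the rounded MB values instead of the exact byte counts:

    l0_in, l6_in, out = 1.6, 8.3, 9.7   # MB, from "in(1.6, 8.3 ...) out(9.7 ...)"
    print(round(out / l0_in, 1))                      # ~6.1, logged write-amplify(6.0)
    print(round((l0_in + l6_in + out) / l0_in, 1))    # ~12.2, logged read-write-amplify(12.1)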
Jan 31 07:12:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:12:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:07.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:12:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:07.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:07 compute-1 ceph-mon[81728]: pgmap v843: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:07 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:07 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1269 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:09 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:12:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:09.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:12:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:09.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:10 compute-1 ceph-mon[81728]: pgmap v844: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:10 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:10 compute-1 podman[223187]: 2026-01-31 07:12:10.14305905 +0000 UTC m=+0.068055746 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:12:10 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:11 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:11.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:11.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:12 compute-1 ceph-mon[81728]: pgmap v845: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:12 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:12 compute-1 podman[223207]: 2026-01-31 07:12:12.150732173 +0000 UTC m=+0.073371646 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:12:13 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:13 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1274 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:13.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:13.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:14 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:14 compute-1 ceph-mon[81728]: pgmap v846: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:15 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:15.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:15.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:16 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:16 compute-1 ceph-mon[81728]: pgmap v847: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:17 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:12:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:17.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:12:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:12:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:12:18 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:18 compute-1 ceph-mon[81728]: pgmap v848: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:18 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1279 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:19.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:19 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:19.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:12:19.887 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:12:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:12:19.887 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:12:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:12:19.888 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
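The three DEBUG lines above (acquiring, acquired after waited 0.001s, released after held 0.000s) are emitted by oslo_concurrency's lock wrapper. A minimal sketch of the pattern that produces them:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # While this body runs, the named in-process lock is held; the
        # decorator's inner wrapper logs the acquire/wait and release/held
        # timings seen in the lines above.
        pass

    _check_child_processes()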
Jan 31 07:12:20 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:20 compute-1 ceph-mon[81728]: pgmap v849: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:20 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:21.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:21.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:21 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:22 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:22 compute-1 ceph-mon[81728]: pgmap v850: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:22 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:22 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1284 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:23.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:23.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:23 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:24 compute-1 ceph-mon[81728]: pgmap v851: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:24 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:25.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:25.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:25 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:25 compute-1 nova_compute[221338]: 2026-01-31 07:12:25.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:25 compute-1 nova_compute[221338]: 2026-01-31 07:12:25.974 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:12:25 compute-1 nova_compute[221338]: 2026-01-31 07:12:25.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:25 compute-1 nova_compute[221338]: 2026-01-31 07:12:25.974 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:12:25 compute-1 nova_compute[221338]: 2026-01-31 07:12:25.998 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:12:25 compute-1 nova_compute[221338]: 2026-01-31 07:12:25.998 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:25 compute-1 nova_compute[221338]: 2026-01-31 07:12:25.999 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:12:26 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:26 compute-1 nova_compute[221338]: 2026-01-31 07:12:26.033 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:27 compute-1 ceph-mon[81728]: pgmap v852: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:27 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:27.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:28 compute-1 nova_compute[221338]: 2026-01-31 07:12:28.046 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:28 compute-1 nova_compute[221338]: 2026-01-31 07:12:28.047 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:12:28 compute-1 nova_compute[221338]: 2026-01-31 07:12:28.047 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:12:28 compute-1 nova_compute[221338]: 2026-01-31 07:12:28.067 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:12:28 compute-1 nova_compute[221338]: 2026-01-31 07:12:28.068 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:28 compute-1 nova_compute[221338]: 2026-01-31 07:12:28.068 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:28 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:28 compute-1 ceph-mon[81728]: pgmap v853: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:28 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1289 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:28 compute-1 nova_compute[221338]: 2026-01-31 07:12:28.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:28 compute-1 nova_compute[221338]: 2026-01-31 07:12:28.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:28 compute-1 nova_compute[221338]: 2026-01-31 07:12:28.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:29 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:29.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:30 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:30 compute-1 ceph-mon[81728]: pgmap v854: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:30 compute-1 nova_compute[221338]: 2026-01-31 07:12:30.969 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:30 compute-1 nova_compute[221338]: 2026-01-31 07:12:30.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.006 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.007 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.007 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.007 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.008 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:12:31 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:12:31 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3454063201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:12:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:31.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.415 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
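The resource tracker above shells out to ceph df through oslo_concurrency.processutils (a 0.407 s round trip here). A sketch of the same call plus reading the cluster-wide free space from the JSON it returns; the "stats"/"total_avail_bytes" layout matches recent Ceph releases but is an assumption here:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB available")  # key assumed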
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.540 221342 WARNING nova.virt.libvirt.driver [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.541 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5376MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.541 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.541 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:12:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:31.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:31 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:31 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3454063201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.648 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.649 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:12:31 compute-1 nova_compute[221338]: 2026-01-31 07:12:31.671 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:12:32 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:12:32 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2977257166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:12:32 compute-1 nova_compute[221338]: 2026-01-31 07:12:32.084 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:12:32 compute-1 nova_compute[221338]: 2026-01-31 07:12:32.090 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed in ProviderTree for provider: 6c25628b-2484-4cb3-b051-815f7248948f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:12:32 compute-1 nova_compute[221338]: 2026-01-31 07:12:32.112 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed for provider 6c25628b-2484-4cb3-b051-815f7248948f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:12:32 compute-1 nova_compute[221338]: 2026-01-31 07:12:32.114 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:12:32 compute-1 nova_compute[221338]: 2026-01-31 07:12:32.114 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:12:32 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:32 compute-1 ceph-mon[81728]: pgmap v855: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:32 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:32 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3300871959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:12:32 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2977257166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:12:32 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1294 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:32 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3551732984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:12:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:33.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:33.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:33 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:34 compute-1 ceph-mon[81728]: pgmap v856: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:34 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:35.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:35 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:36 compute-1 ceph-mon[81728]: pgmap v857: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:36 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:37 compute-1 sudo[223278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:37 compute-1 sudo[223278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:37 compute-1 sudo[223278]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:37.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:37 compute-1 sudo[223303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:12:37 compute-1 sudo[223303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:37 compute-1 sudo[223303]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:37 compute-1 sudo[223328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:37 compute-1 sudo[223328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:37 compute-1 sudo[223328]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:37 compute-1 sudo[223353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:12:37 compute-1 sudo[223353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:37.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:37 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:37 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1299 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:37 compute-1 sudo[223353]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:38 compute-1 ceph-mon[81728]: pgmap v858: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:38 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:12:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:12:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:12:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:12:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:12:38 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:12:38 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1301459868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:12:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:39.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:39 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:39 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/4199279679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:12:40 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:40 compute-1 ceph-mon[81728]: pgmap v859: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:40 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:41 compute-1 podman[223410]: 2026-01-31 07:12:41.131263623 +0000 UTC m=+0.051005933 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 07:12:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:41.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:41 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:41 compute-1 ceph-mon[81728]: pgmap v860: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:42 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:42 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1304 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:43 compute-1 podman[223429]: 2026-01-31 07:12:43.140535722 +0000 UTC m=+0.063180647 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 07:12:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:43.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:43.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:43 compute-1 sudo[223455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:43 compute-1 sudo[223455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:43 compute-1 sudo[223455]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:43 compute-1 sudo[223480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:12:43 compute-1 sudo[223480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:43 compute-1 sudo[223480]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:44 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:44 compute-1 ceph-mon[81728]: pgmap v861: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:44 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:12:44 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:12:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:45.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:45 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:12:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:45.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:12:45 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:46 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:46 compute-1 ceph-mon[81728]: pgmap v862: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:47.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:47.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:47 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:47 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1309 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:48 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:48 compute-1 ceph-mon[81728]: pgmap v863: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:48 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:49.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:49.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:49 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:51 compute-1 ceph-mon[81728]: pgmap v864: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:51 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:51.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:51.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:52 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:52 compute-1 ceph-mon[81728]: pgmap v865: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:53 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:53 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1314 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:53.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:54 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:54 compute-1 ceph-mon[81728]: pgmap v866: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:55.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:55.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:55 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:56 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:56 compute-1 ceph-mon[81728]: pgmap v867: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:56 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:12:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:57.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:12:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:57.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:58 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:58 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1319 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:12:59 compute-1 ceph-mon[81728]: pgmap v868: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:59 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:12:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:59.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:12:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:59.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:00 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:00 compute-1 ceph-mon[81728]: pgmap v869: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:01.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:01 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:01.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:02 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:02 compute-1 ceph-mon[81728]: pgmap v870: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/1748277647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:13:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/1748277647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:13:02 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1324 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:03.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:03.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:04 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:04 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:05 compute-1 ceph-mon[81728]: pgmap v871: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:05 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:05.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:05 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:06 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:06 compute-1 ceph-mon[81728]: pgmap v872: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:07.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:07.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:07 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:07 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1329 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:08 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:08 compute-1 ceph-mon[81728]: pgmap v873: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:08 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:09.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:09.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:09 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:10 compute-1 ceph-mon[81728]: pgmap v874: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:10 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:10 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:13:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:11.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:13:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:11.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:11 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:12 compute-1 podman[223505]: 2026-01-31 07:13:12.134525328 +0000 UTC m=+0.062390203 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:13:13 compute-1 ceph-mon[81728]: pgmap v875: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:13 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:13 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1334 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:13.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:13.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:14 compute-1 podman[223524]: 2026-01-31 07:13:14.130224911 +0000 UTC m=+0.059370881 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:13:14 compute-1 ceph-mon[81728]: pgmap v876: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:14 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:15 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:15.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:15.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:15 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:16 compute-1 ceph-mon[81728]: pgmap v877: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:16 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:17.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:17 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:17 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:17 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1339 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:17.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:18 compute-1 ceph-mon[81728]: pgmap v878: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:18 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:19 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:13:19.889 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:13:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:13:19.889 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:13:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:13:19.889 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:13:20 compute-1 ceph-mon[81728]: pgmap v879: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:20 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:20 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:21.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:21 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:22 compute-1 ceph-mon[81728]: pgmap v880: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:22 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:22 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1344 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:23.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:23.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:23 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:24 compute-1 ceph-mon[81728]: pgmap v881: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:24 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:25.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:25 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:26 compute-1 ceph-mon[81728]: pgmap v882: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:26 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.117757) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606117830, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1264, "num_deletes": 251, "total_data_size": 2184259, "memory_usage": 2228464, "flush_reason": "Manual Compaction"}
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606178939, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1433942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25050, "largest_seqno": 26309, "table_properties": {"data_size": 1428804, "index_size": 2406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13412, "raw_average_key_size": 20, "raw_value_size": 1417507, "raw_average_value_size": 2194, "num_data_blocks": 106, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843527, "oldest_key_time": 1769843527, "file_creation_time": 1769843606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 61212 microseconds, and 3548 cpu microseconds.
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.178980) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1433942 bytes OK
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.178999) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.191032) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.191082) EVENT_LOG_v1 {"time_micros": 1769843606191071, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.191105) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2178064, prev total WAL file size 2178064, number of live WAL files 2.
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.191713) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1400KB)], [48(9960KB)]
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606191778, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 11633721, "oldest_snapshot_seqno": -1}
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 6183 keys, 9990501 bytes, temperature: kUnknown
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606347419, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 9990501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9950508, "index_size": 23435, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 162098, "raw_average_key_size": 26, "raw_value_size": 9838952, "raw_average_value_size": 1591, "num_data_blocks": 934, "num_entries": 6183, "num_filter_entries": 6183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.347703) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9990501 bytes
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.352023) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.7 rd, 64.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.7 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(15.1) write-amplify(7.0) OK, records in: 6698, records dropped: 515 output_compression: NoCompression
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.352081) EVENT_LOG_v1 {"time_micros": 1769843606352058, "job": 28, "event": "compaction_finished", "compaction_time_micros": 155758, "compaction_time_cpu_micros": 15876, "output_level": 6, "num_output_files": 1, "total_output_size": 9990501, "num_input_records": 6698, "num_output_records": 6183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606352528, "job": 28, "event": "table_file_deletion", "file_number": 50}
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843606353786, "job": 28, "event": "table_file_deletion", "file_number": 48}
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.191647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.353825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.353831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.353833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.353836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:26 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:13:26.353838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:27 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:27.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:27.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:28 compute-1 nova_compute[221338]: 2026-01-31 07:13:28.110 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:28 compute-1 nova_compute[221338]: 2026-01-31 07:13:28.126 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:28 compute-1 nova_compute[221338]: 2026-01-31 07:13:28.126 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:28 compute-1 nova_compute[221338]: 2026-01-31 07:13:28.126 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:13:28 compute-1 ceph-mon[81728]: pgmap v883: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:28 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:28 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1349 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:28 compute-1 nova_compute[221338]: 2026-01-31 07:13:28.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:28 compute-1 nova_compute[221338]: 2026-01-31 07:13:28.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:29 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:29.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:29 compute-1 nova_compute[221338]: 2026-01-31 07:13:29.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:29 compute-1 nova_compute[221338]: 2026-01-31 07:13:29.974 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:13:29 compute-1 nova_compute[221338]: 2026-01-31 07:13:29.974 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:13:29 compute-1 nova_compute[221338]: 2026-01-31 07:13:29.997 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:13:30 compute-1 ceph-mon[81728]: pgmap v884: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:30 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:30 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:30 compute-1 nova_compute[221338]: 2026-01-31 07:13:30.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:30 compute-1 nova_compute[221338]: 2026-01-31 07:13:30.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:30 compute-1 nova_compute[221338]: 2026-01-31 07:13:30.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:31 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:31.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:31.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:32 compute-1 ceph-mon[81728]: pgmap v885: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:32 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:32 compute-1 nova_compute[221338]: 2026-01-31 07:13:32.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:13:33 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:33 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1354 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:33.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:33.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:33 compute-1 nova_compute[221338]: 2026-01-31 07:13:33.839 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:13:33 compute-1 nova_compute[221338]: 2026-01-31 07:13:33.840 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:13:33 compute-1 nova_compute[221338]: 2026-01-31 07:13:33.840 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:13:33 compute-1 nova_compute[221338]: 2026-01-31 07:13:33.840 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:13:33 compute-1 nova_compute[221338]: 2026-01-31 07:13:33.840 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:13:34 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:13:34 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3430617717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:13:34 compute-1 nova_compute[221338]: 2026-01-31 07:13:34.278 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:13:34 compute-1 nova_compute[221338]: 2026-01-31 07:13:34.408 221342 WARNING nova.virt.libvirt.driver [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:13:34 compute-1 nova_compute[221338]: 2026-01-31 07:13:34.409 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5341MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:13:34 compute-1 nova_compute[221338]: 2026-01-31 07:13:34.409 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:13:34 compute-1 nova_compute[221338]: 2026-01-31 07:13:34.409 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:13:34 compute-1 ceph-mon[81728]: pgmap v886: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:34 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:34 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3430617717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:13:34 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3690692226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.051 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.052 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.118 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Refreshing inventories for resource provider 6c25628b-2484-4cb3-b051-815f7248948f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.241 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Updating ProviderTree inventory for provider 6c25628b-2484-4cb3-b051-815f7248948f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.242 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Updating inventory in ProviderTree for provider 6c25628b-2484-4cb3-b051-815f7248948f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.297 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Refreshing aggregate associations for resource provider 6c25628b-2484-4cb3-b051-815f7248948f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.325 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Refreshing trait associations for resource provider 6c25628b-2484-4cb3-b051-815f7248948f, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.340 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:13:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:35.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:35 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:35 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:35.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:13:35 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3258350859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.771 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.778 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed in ProviderTree for provider: 6c25628b-2484-4cb3-b051-815f7248948f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.863 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed for provider 6c25628b-2484-4cb3-b051-815f7248948f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.866 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:13:35 compute-1 nova_compute[221338]: 2026-01-31 07:13:35.867 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:13:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:36 compute-1 ceph-mon[81728]: pgmap v887: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:36 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3258350859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:13:36 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3489569883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:13:36 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:37.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:37 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:37 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1359 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:38 compute-1 ceph-mon[81728]: pgmap v888: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:38 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:39.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:39.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:39 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:40 compute-1 ceph-mon[81728]: pgmap v889: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:40 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:40 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3164076236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:13:40 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2231318574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:13:40 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:41.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:42 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:43 compute-1 podman[223594]: 2026-01-31 07:13:43.121391187 +0000 UTC m=+0.043100470 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:13:43 compute-1 ceph-mon[81728]: pgmap v890: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:43 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:43.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:43.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:43 compute-1 sudo[223615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:43 compute-1 sudo[223615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:43 compute-1 sudo[223615]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:43 compute-1 sudo[223640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:13:43 compute-1 sudo[223640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:43 compute-1 sudo[223640]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:43 compute-1 sudo[223665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:43 compute-1 sudo[223665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:43 compute-1 sudo[223665]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:43 compute-1 sudo[223690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:13:43 compute-1 sudo[223690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:44 compute-1 sudo[223690]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:44 compute-1 ceph-mon[81728]: pgmap v891: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:44 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:45 compute-1 podman[223745]: 2026-01-31 07:13:45.134626374 +0000 UTC m=+0.062963288 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Jan 31 07:13:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:45.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:45 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:13:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:13:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:13:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:13:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:13:45 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:13:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:45.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:45 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:46 compute-1 ceph-mon[81728]: pgmap v892: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:46 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:46 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1369 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:46 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:47.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:47.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:48 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:48 compute-1 ceph-mon[81728]: pgmap v893: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:48 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:49.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:50 compute-1 ceph-mon[81728]: pgmap v894: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:50 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:50 compute-1 sudo[223773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:50 compute-1 sudo[223773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:50 compute-1 sudo[223773]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:50 compute-1 sudo[223798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:13:50 compute-1 sudo[223798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:50 compute-1 sudo[223798]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:51 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:13:51 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:13:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:51.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:52 compute-1 ceph-mon[81728]: pgmap v895: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:52 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:52 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1374 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:53 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:13:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:53.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:13:54 compute-1 ceph-mon[81728]: pgmap v896: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:54 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:55.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:56 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:56 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:57 compute-1 ceph-mon[81728]: pgmap v897: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:57 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:57.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:57.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:58 compute-1 ceph-mon[81728]: pgmap v898: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:58 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:58 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1379 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:13:59 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:13:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:13:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:59.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:00 compute-1 ceph-mon[81728]: pgmap v899: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:00 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:01 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:14:01 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3648688807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:14:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:14:01 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3648688807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:14:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:01.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:01.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:02 compute-1 ceph-mon[81728]: pgmap v900: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:02 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/3648688807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:14:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/3648688807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:14:03 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:03 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1384 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:03.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:03.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:04 compute-1 ceph-mon[81728]: pgmap v901: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:04 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:05.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:05 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:05.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:06 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:06 compute-1 ceph-mon[81728]: pgmap v902: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:06 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:07.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:07 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:07.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:08 compute-1 ceph-mon[81728]: pgmap v903: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:08 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:08 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1389 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:08 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:09.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:09 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:09.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:10 compute-1 ceph-mon[81728]: pgmap v904: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:10 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:11 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:11.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:11 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:11.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:12 compute-1 sshd-session[223824]: Received disconnect from 36.111.150.151 port 41514:11:  [preauth]
Jan 31 07:14:12 compute-1 sshd-session[223824]: Disconnected from authenticating user root 36.111.150.151 port 41514 [preauth]
Jan 31 07:14:12 compute-1 ceph-mon[81728]: pgmap v905: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:12 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:12 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1394 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:13.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:13 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:13.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:14 compute-1 podman[223826]: 2026-01-31 07:14:14.125113468 +0000 UTC m=+0.050199522 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:14:14 compute-1 ceph-mon[81728]: pgmap v906: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:14 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:15.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:15.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:15 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:16 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:16 compute-1 podman[223846]: 2026-01-31 07:14:16.151907285 +0000 UTC m=+0.079815835 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 07:14:16 compute-1 ceph-mon[81728]: pgmap v907: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:16 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:17.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:17.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:17 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:17 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:18 compute-1 ceph-mon[81728]: pgmap v908: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:18 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:19.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:19.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:19 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:19 compute-1 ceph-mon[81728]: pgmap v909: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:14:19.890 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:14:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:14:19.891 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:14:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:14:19.891 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:14:20 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:21 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:21.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:21.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:21 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:21 compute-1 ceph-mon[81728]: pgmap v910: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.676953) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662677037, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 988, "num_deletes": 250, "total_data_size": 1594966, "memory_usage": 1613368, "flush_reason": "Manual Compaction"}
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662685024, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 706955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26314, "largest_seqno": 27297, "table_properties": {"data_size": 703262, "index_size": 1281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10910, "raw_average_key_size": 21, "raw_value_size": 694861, "raw_average_value_size": 1338, "num_data_blocks": 55, "num_entries": 519, "num_filter_entries": 519, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843607, "oldest_key_time": 1769843607, "file_creation_time": 1769843662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 8112 microseconds, and 3780 cpu microseconds.
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.685070) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 706955 bytes OK
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.685095) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.688972) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.688995) EVENT_LOG_v1 {"time_micros": 1769843662688986, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.689021) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1589912, prev total WAL file size 1589912, number of live WAL files 2.
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.689572) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(690KB)], [51(9756KB)]
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662689614, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 10697456, "oldest_snapshot_seqno": -1}
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 6211 keys, 7152004 bytes, temperature: kUnknown
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662749677, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 7152004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7115893, "index_size": 19489, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 163498, "raw_average_key_size": 26, "raw_value_size": 7007742, "raw_average_value_size": 1128, "num_data_blocks": 760, "num_entries": 6211, "num_filter_entries": 6211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.749969) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7152004 bytes
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.751439) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.9 rd, 118.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(25.2) write-amplify(10.1) OK, records in: 6702, records dropped: 491 output_compression: NoCompression
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.751463) EVENT_LOG_v1 {"time_micros": 1769843662751454, "job": 30, "event": "compaction_finished", "compaction_time_micros": 60145, "compaction_time_cpu_micros": 15861, "output_level": 6, "num_output_files": 1, "total_output_size": 7152004, "num_input_records": 6702, "num_output_records": 6211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662751688, "job": 30, "event": "table_file_deletion", "file_number": 53}
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843662752506, "job": 30, "event": "table_file_deletion", "file_number": 51}
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.689524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.752531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.752535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.752537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.752538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:22 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:14:22.752540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:22 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:22 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:23.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:24 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:24 compute-1 ceph-mon[81728]: pgmap v911: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:25 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:26 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:26 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:26 compute-1 ceph-mon[81728]: pgmap v912: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:27 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:27.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:28 compute-1 ceph-mon[81728]: pgmap v913: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:28 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:28 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1409 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:29 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:29.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:30 compute-1 ceph-mon[81728]: pgmap v914: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:30 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.867 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.868 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.868 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.885 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.886 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.886 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.886 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.887 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.887 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:14:30 compute-1 nova_compute[221338]: 2026-01-31 07:14:30.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:14:31 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:31 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:31.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:31 compute-1 nova_compute[221338]: 2026-01-31 07:14:31.968 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:14:31 compute-1 nova_compute[221338]: 2026-01-31 07:14:31.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:14:32 compute-1 ceph-mon[81728]: pgmap v915: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:32 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:33.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:33 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:33 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1414 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:33.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:33 compute-1 nova_compute[221338]: 2026-01-31 07:14:33.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.009 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.009 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.010 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.010 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.010 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:14:34 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:14:34 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1809078508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.480 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:14:34 compute-1 ceph-mon[81728]: pgmap v916: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:34 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:34 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1809078508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.638 221342 WARNING nova.virt.libvirt.driver [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.640 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5356MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.640 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.640 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.734 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.735 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:14:34 compute-1 nova_compute[221338]: 2026-01-31 07:14:34.765 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:14:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:14:35 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3266230869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:14:35 compute-1 nova_compute[221338]: 2026-01-31 07:14:35.206 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:14:35 compute-1 nova_compute[221338]: 2026-01-31 07:14:35.244 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed in ProviderTree for provider: 6c25628b-2484-4cb3-b051-815f7248948f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:14:35 compute-1 nova_compute[221338]: 2026-01-31 07:14:35.477 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed for provider 6c25628b-2484-4cb3-b051-815f7248948f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:14:35 compute-1 nova_compute[221338]: 2026-01-31 07:14:35.478 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:14:35 compute-1 nova_compute[221338]: 2026-01-31 07:14:35.479 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:14:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:35.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:35 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:35 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3266230869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:14:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:35.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:36 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:36 compute-1 ceph-mon[81728]: pgmap v917: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:36 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:36 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/420426607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:14:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:37 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:37 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1523703620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:14:37 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:37.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:38 compute-1 ceph-mon[81728]: pgmap v918: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:38 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:38 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:39.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:39 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:39.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:40 compute-1 ceph-mon[81728]: pgmap v919: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:40 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:41 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:41 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:41.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:42 compute-1 ceph-mon[81728]: pgmap v920: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:42 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:42 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2659911957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:14:42 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:14:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:14:43 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3319490362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:14:43 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:43.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:44 compute-1 ceph-mon[81728]: pgmap v921: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:44 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:45 compute-1 podman[223916]: 2026-01-31 07:14:45.121401866 +0000 UTC m=+0.045480953 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:14:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:45.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:45 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:45.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:46 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:46 compute-1 ceph-mon[81728]: pgmap v922: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:46 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:47 compute-1 podman[223937]: 2026-01-31 07:14:47.144593786 +0000 UTC m=+0.066840993 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:14:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:47 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:47 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1429 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:47.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:48 compute-1 ceph-mon[81728]: pgmap v923: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:48 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:49.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:49.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:49 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:50 compute-1 ceph-mon[81728]: pgmap v924: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:50 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:50 compute-1 sudo[223965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:50 compute-1 sudo[223965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:50 compute-1 sudo[223965]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:50 compute-1 sudo[223990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:14:50 compute-1 sudo[223990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:50 compute-1 sudo[223990]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:50 compute-1 sudo[224015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:50 compute-1 sudo[224015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:50 compute-1 sudo[224015]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:51 compute-1 sudo[224040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:14:51 compute-1 sudo[224040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:51 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:51 compute-1 podman[224138]: 2026-01-31 07:14:51.445759801 +0000 UTC m=+0.064428368 container exec 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:14:51 compute-1 podman[224138]: 2026-01-31 07:14:51.548310462 +0000 UTC m=+0.166979039 container exec_died 6672f2dbf6180479da9a6d4a3be2a0622c308e279fdc39713e442740f41af4cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-ef73c6e0-6d85-55c2-9347-1f544d3e3d3a-crash-compute-1, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:14:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:51.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:51.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:51 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:51 compute-1 sudo[224040]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:51 compute-1 sudo[224256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:51 compute-1 sudo[224256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:51 compute-1 sudo[224256]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:51 compute-1 sudo[224281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:14:51 compute-1 sudo[224281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:51 compute-1 sudo[224281]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:52 compute-1 sudo[224306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:52 compute-1 sudo[224306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:52 compute-1 sudo[224306]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:52 compute-1 sudo[224331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:14:52 compute-1 sudo[224331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:52 compute-1 sudo[224331]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:52 compute-1 ceph-mon[81728]: pgmap v925: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:52 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:14:52 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:14:52 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:52 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:14:52 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:14:52 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:53.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:53 compute-1 ceph-mon[81728]: pgmap v926: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:53 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:14:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:14:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:14:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:14:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:14:53 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:14:55 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:55.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:14:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:14:56 compute-1 ceph-mon[81728]: pgmap v927: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:56 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:57 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:57.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:57.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:58 compute-1 ceph-mon[81728]: pgmap v928: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:58 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:58 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:14:58 compute-1 sudo[224387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:58 compute-1 sudo[224387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:58 compute-1 sudo[224387]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:58 compute-1 sudo[224412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:14:58 compute-1 sudo[224412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:58 compute-1 sudo[224412]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:59 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:14:59 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:14:59 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:14:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:59.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:14:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:59.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:00 compute-1 ceph-mon[81728]: pgmap v929: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:00 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:01.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:02 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:03 compute-1 ceph-mon[81728]: pgmap v930: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:03 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/2460259664' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:15:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/2460259664' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:15:03 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:03 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:04.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:04 compute-1 ceph-mon[81728]: pgmap v931: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:04 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:05 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:05.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:06 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:06.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:06 compute-1 ceph-mon[81728]: pgmap v932: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:06 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:07 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:07.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:08.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:08 compute-1 ceph-mon[81728]: pgmap v933: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:08 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:08 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1449 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:09 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:09.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:10.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:10 compute-1 ceph-mon[81728]: pgmap v934: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:10 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:11 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:11.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:11 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:12 compute-1 ceph-mon[81728]: pgmap v935: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:12 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:13.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:13 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:13 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:13 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:14.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:14 compute-1 sshd-session[224437]: Invalid user sol from 2.57.122.238 port 57068
Jan 31 07:15:14 compute-1 sshd-session[224437]: Connection closed by invalid user sol 2.57.122.238 port 57068 [preauth]
Jan 31 07:15:14 compute-1 ceph-mon[81728]: pgmap v936: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:14 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:15.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:15 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:16 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:16 compute-1 podman[224439]: 2026-01-31 07:15:16.158109188 +0000 UTC m=+0.082293690 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:15:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:16.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:16 compute-1 ceph-mon[81728]: pgmap v937: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:16 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:17.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:17 compute-1 ceph-mon[81728]: pgmap v938: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:17 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:17 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1459 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:18 compute-1 podman[224458]: 2026-01-31 07:15:18.186710928 +0000 UTC m=+0.105041001 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:15:18 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:19.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:15:19.891 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:15:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:15:19.893 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:15:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:15:19.893 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:15:19 compute-1 ceph-mon[81728]: pgmap v939: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:19 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:20.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:21 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:21 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:21.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:22 compute-1 ceph-mon[81728]: pgmap v940: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:22 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:22.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:23 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:23 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1464 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:23.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:24 compute-1 ceph-mon[81728]: pgmap v941: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:24 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:25 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:25.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:26 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:26 compute-1 ceph-mon[81728]: pgmap v942: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:26 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:26.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:27 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:27.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:28.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:28 compute-1 ceph-mon[81728]: pgmap v943: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:28 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:28 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:29 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:29 compute-1 nova_compute[221338]: 2026-01-31 07:15:29.474 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:29 compute-1 nova_compute[221338]: 2026-01-31 07:15:29.494 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:29 compute-1 nova_compute[221338]: 2026-01-31 07:15:29.495 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:29 compute-1 nova_compute[221338]: 2026-01-31 07:15:29.495 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:15:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:29.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:29 compute-1 nova_compute[221338]: 2026-01-31 07:15:29.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:29 compute-1 nova_compute[221338]: 2026-01-31 07:15:29.974 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:15:29 compute-1 nova_compute[221338]: 2026-01-31 07:15:29.975 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:15:29 compute-1 nova_compute[221338]: 2026-01-31 07:15:29.988 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:15:29 compute-1 nova_compute[221338]: 2026-01-31 07:15:29.989 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:30 compute-1 ceph-mon[81728]: pgmap v944: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:30 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:30 compute-1 nova_compute[221338]: 2026-01-31 07:15:30.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:31 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:31 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:31.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:31 compute-1 nova_compute[221338]: 2026-01-31 07:15:31.968 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:31 compute-1 nova_compute[221338]: 2026-01-31 07:15:31.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:32.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:32 compute-1 ceph-mon[81728]: pgmap v945: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:32 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:32 compute-1 nova_compute[221338]: 2026-01-31 07:15:32.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:33.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:34.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:34 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:34 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:34 compute-1 nova_compute[221338]: 2026-01-31 07:15:34.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.003 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.004 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.004 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.004 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.004 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:15:35 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:15:35 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2610110318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.460 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.609 221342 WARNING nova.virt.libvirt.driver [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.611 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5360MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.611 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.611 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:15:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:35.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.677 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.678 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:15:35 compute-1 nova_compute[221338]: 2026-01-31 07:15:35.695 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:15:35 compute-1 ceph-mon[81728]: pgmap v946: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:35 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:35 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:35 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:35 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2610110318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:15:36 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:36 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:15:36 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3578357891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:15:36 compute-1 nova_compute[221338]: 2026-01-31 07:15:36.157 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:15:36 compute-1 nova_compute[221338]: 2026-01-31 07:15:36.163 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed in ProviderTree for provider: 6c25628b-2484-4cb3-b051-815f7248948f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:15:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:36.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:36 compute-1 nova_compute[221338]: 2026-01-31 07:15:36.266 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed for provider 6c25628b-2484-4cb3-b051-815f7248948f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:15:36 compute-1 nova_compute[221338]: 2026-01-31 07:15:36.268 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:15:36 compute-1 nova_compute[221338]: 2026-01-31 07:15:36.268 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:15:36 compute-1 ceph-mon[81728]: pgmap v947: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:36 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3578357891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:15:36 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:36 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2396348519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:15:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:37.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:38 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:38 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2574679015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:15:38 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1479 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:38.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:39 compute-1 ceph-mon[81728]: pgmap v948: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:39 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:39.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:40 compute-1 ceph-mon[81728]: pgmap v949: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:40 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:41 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:41 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:41.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:42.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:42 compute-1 ceph-mon[81728]: pgmap v950: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:42 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:42 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:42 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:43.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:43 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:44.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:44 compute-1 ceph-mon[81728]: pgmap v951: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:44 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1041096414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:15:44 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:44 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/875464181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:15:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:45.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:45 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:46 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:46.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:46 compute-1 ceph-mon[81728]: pgmap v952: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:46 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:47 compute-1 podman[224530]: 2026-01-31 07:15:47.118802288 +0000 UTC m=+0.045144705 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 07:15:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:47.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:47 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:47 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:48.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:48 compute-1 ceph-mon[81728]: pgmap v953: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:48 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:49 compute-1 podman[224550]: 2026-01-31 07:15:49.140551621 +0000 UTC m=+0.067645249 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:15:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:15:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:49.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:15:49 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:50.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:50 compute-1 ceph-mon[81728]: pgmap v954: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:50 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:51 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:51.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:51 compute-1 ceph-mon[81728]: pgmap v955: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:51 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:52.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:53 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:53 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:54 compute-1 ceph-mon[81728]: pgmap v956: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:54 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:54.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:55 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:15:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:55.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:15:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:56 compute-1 ceph-mon[81728]: pgmap v957: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:56 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:56.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:57 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:57.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:58.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:58 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:58 compute-1 ceph-mon[81728]: pgmap v958: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:58 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1499 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:15:58 compute-1 sudo[224576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:58 compute-1 sudo[224576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:58 compute-1 sudo[224576]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:59 compute-1 sudo[224601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:15:59 compute-1 sudo[224601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:59 compute-1 sudo[224601]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:59 compute-1 sudo[224626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:59 compute-1 sudo[224626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:59 compute-1 sudo[224626]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:59 compute-1 sudo[224651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/ef73c6e0-6d85-55c2-9347-1f544d3e3d3a/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:15:59 compute-1 sudo[224651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:59 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:15:59 compute-1 sudo[224651]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:15:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:59.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:00.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:00 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:00 compute-1 ceph-mon[81728]: pgmap v959: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:16:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:16:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:16:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:16:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:16:00 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:16:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:01 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:01.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:02.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:02 compute-1 ceph-mon[81728]: pgmap v960: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:02 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/4254534780' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:16:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/4254534780' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:16:03 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:03 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:04.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:04 compute-1 ceph-mon[81728]: pgmap v961: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:04 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:05 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:05 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:16:05 compute-1 sudo[224707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:05 compute-1 sudo[224707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:05 compute-1 sudo[224707]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:05 compute-1 sudo[224732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:16:05 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:05 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:05 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:05 compute-1 sudo[224732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:05 compute-1 sudo[224732]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:06 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:06 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:06 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:06 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:06.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:06 compute-1 ceph-mon[81728]: pgmap v962: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:06 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:06 compute-1 ceph-mon[81728]: from='mgr.14132 192.168.122.100:0/3572103130' entity='mgr.compute-0.gghdjs' 
Jan 31 07:16:07 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:07 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:07 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:07 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:08 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:08 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:08 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:08.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:08 compute-1 ceph-mon[81728]: pgmap v963: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:08 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:08 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:08 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:09 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:09 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:09 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:09.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:09 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:10 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:10 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:10 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:10.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:10 compute-1 ceph-mon[81728]: pgmap v964: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:10 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:11 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:11 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:11 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:11 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:11.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:11 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.882259) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771882311, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1731, "num_deletes": 251, "total_data_size": 3240686, "memory_usage": 3297344, "flush_reason": "Manual Compaction"}
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771903354, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2107029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27302, "largest_seqno": 29028, "table_properties": {"data_size": 2100265, "index_size": 3579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17404, "raw_average_key_size": 21, "raw_value_size": 2085362, "raw_average_value_size": 2527, "num_data_blocks": 157, "num_entries": 825, "num_filter_entries": 825, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843662, "oldest_key_time": 1769843662, "file_creation_time": 1769843771, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21136 microseconds, and 4496 cpu microseconds.
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.903401) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2107029 bytes OK
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.903426) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.907910) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.907927) EVENT_LOG_v1 {"time_micros": 1769843771907922, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.907954) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3232530, prev total WAL file size 3232530, number of live WAL files 2.
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.908621) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2057KB)], [54(6984KB)]
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771908695, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 9259033, "oldest_snapshot_seqno": -1}
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6519 keys, 7630605 bytes, temperature: kUnknown
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771985147, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 7630605, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7592359, "index_size": 20856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 171332, "raw_average_key_size": 26, "raw_value_size": 7478536, "raw_average_value_size": 1147, "num_data_blocks": 817, "num_entries": 6519, "num_filter_entries": 6519, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769842193, "oldest_key_time": 0, "file_creation_time": 1769843771, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8fdde4a6-77fc-4fe2-bbb0-906c3e5c26f6", "db_session_id": "HV1COZEZD0ZIIOS10G8C", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.986003) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 7630605 bytes
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.990105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.1 rd, 99.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 7036, records dropped: 517 output_compression: NoCompression
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.990158) EVENT_LOG_v1 {"time_micros": 1769843771990142, "job": 32, "event": "compaction_finished", "compaction_time_micros": 77110, "compaction_time_cpu_micros": 18278, "output_level": 6, "num_output_files": 1, "total_output_size": 7630605, "num_input_records": 7036, "num_output_records": 6519, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771990634, "job": 32, "event": "table_file_deletion", "file_number": 56}
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843771991409, "job": 32, "event": "table_file_deletion", "file_number": 54}
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.908557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.991466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.991471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.991473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.991475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:16:11 compute-1 ceph-mon[81728]: rocksdb: (Original Log Time 2026/01/31-07:16:11.991477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:16:12 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:12 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:12 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:12.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:12 compute-1 ceph-mon[81728]: pgmap v965: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:12 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:12 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:13 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:13 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:13 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:13.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:13 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:14 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:14 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:14 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:14.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:14 compute-1 ceph-mon[81728]: pgmap v966: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:14 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:15 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:15 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:15 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:15.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:16 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:16 compute-1 ceph-mon[81728]: pgmap v967: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:16 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:16 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:16 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:16 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:16.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:17 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:17 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:17 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:17 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:17.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:18 compute-1 podman[224757]: 2026-01-31 07:16:18.13484602 +0000 UTC m=+0.053975466 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:16:18 compute-1 ceph-mon[81728]: pgmap v968: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:18 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:18 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1519 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:18 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:18 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:18 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:18.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:19 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:19 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:19 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:19 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:19.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:16:19.893 140083 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:16:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:16:19.894 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:16:19 compute-1 ovn_metadata_agent[140078]: 2026-01-31 07:16:19.894 140083 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:16:20 compute-1 podman[224777]: 2026-01-31 07:16:20.138689074 +0000 UTC m=+0.066359074 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:16:20 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:20 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:20 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:20.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:20 compute-1 ceph-mon[81728]: pgmap v969: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:20 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:21 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:21 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:21 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:21 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:21 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:21.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:22 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:22 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:22 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:22.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:22 compute-1 ceph-mon[81728]: pgmap v970: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:22 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:23 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:23 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1524 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:23 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:23 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:23 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:23.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:24 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:24 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:24 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:24.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:24 compute-1 ceph-mon[81728]: pgmap v971: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:24 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:25 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:25 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:25 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:25.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:25 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:25 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:26 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:26 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:26 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:26 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:26.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:26 compute-1 ceph-mon[81728]: pgmap v972: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:26 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:27 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:27 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:27 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:27.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:27 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:27 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1529 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:28 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:28 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:28 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:28.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:29 compute-1 ceph-mon[81728]: pgmap v973: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:29 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:29 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:29 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:29 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:29.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:30 compute-1 ceph-mon[81728]: pgmap v974: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:30 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:30 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:30 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:30 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:30.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:31 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.270 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.271 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.271 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.292 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.293 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.293 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.293 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.293 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.294 221342 DEBUG nova.compute.manager [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:16:31 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:31 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:31 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:31 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:31.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:31 compute-1 nova_compute[221338]: 2026-01-31 07:16:31.991 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:16:32 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:32 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:32 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:32 compute-1 ceph-mon[81728]: pgmap v975: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:32 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:32 compute-1 nova_compute[221338]: 2026-01-31 07:16:32.973 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:16:33 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:33 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1534 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:33 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:33 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:33 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:33.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:33 compute-1 nova_compute[221338]: 2026-01-31 07:16:33.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:16:34 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:34 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:34 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:34.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:34 compute-1 ceph-mon[81728]: pgmap v976: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:34 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:34 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:35 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:35 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:35 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:35.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:35 compute-1 sshd-session[224804]: Accepted publickey for zuul from 192.168.122.10 port 52822 ssh2: ECDSA SHA256:GNnH0XNpNRR6FRUv+R+7oah2/aUsb2by5T5amIaGgSM
Jan 31 07:16:35 compute-1 systemd-logind[788]: New session 51 of user zuul.
Jan 31 07:16:35 compute-1 systemd[1]: Started Session 51 of User zuul.
Jan 31 07:16:35 compute-1 sshd-session[224804]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:16:36 compute-1 sudo[224808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 31 07:16:36 compute-1 sudo[224808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:36 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:36 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:36 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:36 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:36 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:36 compute-1 nova_compute[221338]: 2026-01-31 07:16:36.974 221342 DEBUG oslo_service.periodic_task [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:16:37 compute-1 nova_compute[221338]: 2026-01-31 07:16:37.393 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:16:37 compute-1 nova_compute[221338]: 2026-01-31 07:16:37.394 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:16:37 compute-1 nova_compute[221338]: 2026-01-31 07:16:37.394 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:16:37 compute-1 nova_compute[221338]: 2026-01-31 07:16:37.394 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:16:37 compute-1 nova_compute[221338]: 2026-01-31 07:16:37.394 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:16:37 compute-1 ceph-mon[81728]: pgmap v977: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:37 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:37 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/502285413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:16:37 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:37 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:37 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:37.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:37 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:16:37 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2446408640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:16:37 compute-1 nova_compute[221338]: 2026-01-31 07:16:37.894 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.083 221342 WARNING nova.virt.libvirt.driver [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.085 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5292MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.085 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.085 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.188 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.188 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.216 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:16:38 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:38 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:38 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:38.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:38 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:16:38 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1369237533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.654 221342 DEBUG oslo_concurrency.processutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.658 221342 DEBUG nova.compute.provider_tree [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed in ProviderTree for provider: 6c25628b-2484-4cb3-b051-815f7248948f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:16:38 compute-1 ceph-mon[81728]: pgmap v978: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:38 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:38 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1539 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:38 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2446408640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:16:38 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:38 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3750996648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.780 221342 DEBUG nova.scheduler.client.report [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Inventory has not changed for provider 6c25628b-2484-4cb3-b051-815f7248948f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.782 221342 DEBUG nova.compute.resource_tracker [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:16:38 compute-1 nova_compute[221338]: 2026-01-31 07:16:38.783 221342 DEBUG oslo_concurrency.lockutils [None req-4af89ab9-f5c3-4bc3-aaca-24a003370c03 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:16:39 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 07:16:39 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3923681103' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 07:16:39 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:39 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:39 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:39.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:39 compute-1 ceph-mon[81728]: from='client.24697 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:39 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1369237533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:16:39 compute-1 ceph-mon[81728]: from='client.14913 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:39 compute-1 ceph-mon[81728]: from='client.24680 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:39 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:39 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3923681103' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 07:16:39 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/4234553437' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 07:16:40 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:40 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:40 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:40.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:41 compute-1 ceph-mon[81728]: pgmap v979: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:41 compute-1 ceph-mon[81728]: from='client.14919 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:41 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:41 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:41 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:41 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:41 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:41.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:42 compute-1 ceph-mon[81728]: from='client.24724 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:42 compute-1 ceph-mon[81728]: from='client.24730 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:42 compute-1 ceph-mon[81728]: pgmap v980: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:42 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3092208665' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 07:16:42 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:42 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:42 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:42 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:42.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:43 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:43 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:43 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:43 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:43 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:43.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:44 compute-1 ceph-mon[81728]: pgmap v981: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:44 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:44 compute-1 ovs-vsctl[225178]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 07:16:44 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:44 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:44 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:44.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:45 compute-1 virtqemud[221400]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 07:16:45 compute-1 virtqemud[221400]: hostname: compute-1
Jan 31 07:16:45 compute-1 virtqemud[221400]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 07:16:45 compute-1 virtqemud[221400]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 07:16:45 compute-1 virtqemud[221400]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 07:16:45 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:45 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: cache status {prefix=cache status} (starting...)
Jan 31 07:16:45 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:45 compute-1 lvm[225499]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:16:45 compute-1 lvm[225499]: VG ceph_vg0 finished
Jan 31 07:16:45 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: client ls {prefix=client ls} (starting...)
Jan 31 07:16:45 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:45 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:45 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:45 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:45.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:46 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:46 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:46 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:46 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:46.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:46 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 07:16:46 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2760291761' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:46 compute-1 ceph-mon[81728]: pgmap v982: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:46 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:46 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2760291761' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 07:16:46 compute-1 ceph-mon[81728]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:46 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:16:46 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1878765236' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 07:16:46 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:47 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 07:16:47 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3378331011' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 07:16:47 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:47 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 07:16:47 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:47 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: ops {prefix=ops} (starting...)
Jan 31 07:16:47 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:47 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:47 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:47 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:47.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:47 compute-1 ceph-mon[81728]: from='client.24701 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:47 compute-1 ceph-mon[81728]: from='client.24716 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3609745067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mon[81728]: from='client.14934 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1878765236' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1012174330' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3378331011' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3486489742' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 07:16:47 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/215816537' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 07:16:47 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 07:16:47 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/443163303' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 07:16:48 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3335127190' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 07:16:48 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:48 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:48 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:48.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:48 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: session ls {prefix=session ls} (starting...)
Jan 31 07:16:48 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle Can't run that command on an inactive MDS!
Jan 31 07:16:48 compute-1 podman[225829]: 2026-01-31 07:16:48.404737379 +0000 UTC m=+0.087921794 container health_status 671da109dd400e82bcd8053314772a0ed094aa90abf52ef452a215565c32bd06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 07:16:48 compute-1 ceph-mds[84120]: mds.cephfs.compute-1.hhzmle asok_command: status {prefix=status} (starting...)
Jan 31 07:16:48 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 07:16:48 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/559056304' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: pgmap v983: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.14943 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.24743 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/379256453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.14973 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/215816537' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/443163303' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/6701442' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3962392147' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3335127190' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/4218391499' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2020716181' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 07:16:48 compute-1 ceph-mon[81728]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 07:16:49 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2449006194' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 07:16:49 compute-1 sshd-session[225892]: Received disconnect from 45.148.10.151 port 10706:11:  [preauth]
Jan 31 07:16:49 compute-1 sshd-session[225892]: Disconnected from authenticating user root 45.148.10.151 port 10706 [preauth]
Jan 31 07:16:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 07:16:49 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/251179991' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 07:16:49 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2606329541' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 07:16:49 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3076572471' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:16:49 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:49 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:49 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:49.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.24784 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.24773 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.24799 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.24788 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.15012 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/559056304' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2993630705' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/4032169188' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2449006194' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/4256007245' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/251179991' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/871985638' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1954332218' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2606329541' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3076572471' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/4141650263' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1758497871' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 07:16:49 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1334420105' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 07:16:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 07:16:50 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1172923731' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 07:16:50 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:50 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:50 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:50.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 07:16:50 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1787687598' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 07:16:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 07:16:50 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1701795184' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 07:16:50 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 31 07:16:50 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/17496953' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:51 compute-1 ceph-mon[81728]: pgmap v984: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.15024 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.24832 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.24845 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1172923731' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2772346024' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.24859 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1343418582' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3098962638' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1787687598' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.24871 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.15072 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1701795184' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/275701586' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1702348069' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 07:16:51 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/17496953' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 07:16:51 compute-1 podman[226181]: 2026-01-31 07:16:51.195578256 +0000 UTC m=+0.111689322 container health_status 572fd483166548cd0acd6c998a091ed5449ef89630528d468e622e8d0ca097a1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c1b0edbf4bf7b545bb529b976e45f91f71c465cee30eb894f195d01691384cb8-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f-e616f493f93d1dc424f0ce34f7e388d4bd29696489dca5f94d75afff91d5a91f'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:16:51 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 07:16:51 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1459796621' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 07:16:51 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:51 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:51 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:51.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:51 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 07:16:51 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2259964109' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 07:16:52 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867841681' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.8( v 43'8 lc 0'0 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=55/51 les/c/f=56/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.19( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=55/51 les/c/f=56/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.13( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.012584 1 0.000072
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.13( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.015190 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.13( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.036202 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039397 7 0.000683
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.f( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040208 7 0.000174
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.f( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.f( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.10( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041202 7 0.000141
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.12( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041417 7 0.000154
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.10( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.10( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.11( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041418 7 0.000152
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.12( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.11( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.11( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1e( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041316 7 0.000096
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1e( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1e( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.4( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040687 7 0.000173
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.12( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.4( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.4( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.2( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.019426 1 0.000043
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.2( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.022032 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.2( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.043010 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.13] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.8( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.024048 1 0.000047
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.8( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.026574 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.8( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.048581 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.2] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.8] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.18( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030794 1 0.000035
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.18( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.032849 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.18( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.055825 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.18] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.19( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038030 1 0.000033
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.19( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.040124 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.19( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.063115 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.19] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.5( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045189 1 0.000100
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.5( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.047378 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.5( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.070175 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.5] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1b( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.052596 1 0.000031
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1b( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.054455 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1b( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.078426 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.1b] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.15( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.071770 3 0.001123
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.15( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.072522 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.15( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.15( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.14( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.135805 3 0.000258
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.14( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.136017 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.14( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.14( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.8( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=55/51 les/c/f=56/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 mlcod 43'8 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.125129 1 0.000077
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.8( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=55/51 les/c/f=56/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 mlcod 43'8 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.8( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=55/51 les/c/f=56/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 mlcod 43'8 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[8.8( v 43'8 (0'0,43'8] local-lis/les=55/56 n=0 ec=51/42 lis/c=55/51 les/c/f=56/52/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=43'8 mlcod 43'8 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.118175 1 0.000044
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.f( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.118310 1 0.000017
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.f( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.10( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.118374 1 0.000018
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.10( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.11( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.118427 1 0.000045
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.11( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1e( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.118570 1 0.000027
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1e( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.12( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.118660 1 0.000178
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.12( v 47'48 (0'0,47'48] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.4( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.118599 1 0.000126
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.4( v 47'48 (0'0,47'48] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.15( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.065363 1 0.000095
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.15( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.14( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001952 1 0.000084
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.14( v 54'51 (0'0,54'51] local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007611 1 0.000094
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.125841 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.165501 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.1] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.f( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014718 1 0.000045
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.f( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.133067 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.f( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.173311 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.10( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.021941 1 0.000028
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.10( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.140346 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.10( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.181586 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.10] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.11( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029187 1 0.000134
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.11( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.147699 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.11( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.189164 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.11] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1e( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036552 1 0.000037
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1e( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.155158 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.1e( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.196515 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.12( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.043855 1 0.000032
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.12( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.162550 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.12( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.204051 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.12] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.4( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 DELETING pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051250 1 0.000026
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.4( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.169944 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.4( v 47'48 (0'0,47'48] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 crt=47'48 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.210725 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.4] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.15( v 54'51 (0'0,54'51] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.075012 2 0.000138
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.15( v 54'51 (0'0,54'51] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.140428 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.15( v 54'51 (0'0,54'51] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started 1.233398 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.15] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.14( v 54'51 (0'0,54'51] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.080897 2 0.000228
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.14( v 54'51 (0'0,54'51] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.082891 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.14( v 54'51 (0'0,54'51] lb MIN local-lis/les=53/54 n=0 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started 1.239410 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.14] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 62865408 unmapped: 958464 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.3( v 54'51 (0'0,54'51] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.709938 3 0.000021
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.3( v 54'51 (0'0,54'51] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.709995 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.3( v 54'51 (0'0,54'51] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.3( v 54'51 (0'0,54'51] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.3( v 54'51 (0'0,54'51] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000163 1 0.000121
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.3( v 54'51 (0'0,54'51] local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.3] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.3( v 54'51 (0'0,54'51] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 DELETING pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.011307 2 0.000290
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.3( v 54'51 (0'0,54'51] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.011519 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 56 pg[10.3( v 54'51 (0'0,54'51] lb MIN local-lis/les=53/54 n=1 ec=53/46 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=-1 lpr=55 pi=[53,55)/1 luod=0'0 crt=54'51 lcod 54'50 mlcod 0'0 active mbc={}] exit Started 1.743728 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[10.3] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 56 heartbeat osd_stat(store_statfs(0x1be0c0000/0x0/0x1bfc00000, data 0xb685d/0x10c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:46.649001+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 60 sent 58 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:16.167719+0000 osd.1 (osd.1) 59 : cluster [DBG] 3.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:16.177438+0000 osd.1 (osd.1) 60 : cluster [DBG] 3.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 60) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:16.167719+0000 osd.1 (osd.1) 59 : cluster [DBG] 3.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:16.177438+0000 osd.1 (osd.1) 60 : cluster [DBG] 3.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:47.649198+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 62 sent 60 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:17.188273+0000 osd.1 (osd.1) 61 : cluster [DBG] 6.19 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:17.198076+0000 osd.1 (osd.1) 62 : cluster [DBG] 6.19 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 62) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:17.188273+0000 osd.1 (osd.1) 61 : cluster [DBG] 6.19 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:17.198076+0000 osd.1 (osd.1) 62 : cluster [DBG] 6.19 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:48.649556+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 58 heartbeat osd_stat(store_statfs(0x1be0ba000/0x0/0x1bfc00000, data 0xba353/0x112000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:49.649712+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 475097 data_alloc: 285212672 data_used: 110592
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:50.649891+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:51.650033+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.421001434s of 10.695325851s, submitted: 314
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 62980096 unmapped: 843776 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:52.650227+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_auth_request added challenge on 0x55c84041a400
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63258624 unmapped: 565248 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:53.650418+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _renew_subs
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 60 handle_osd_map epochs [61,61], i have 61, src has [1,61]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63315968 unmapped: 507904 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 62 ms_handle_reset con 0x55c84041a400 session 0x55c83f21ef00
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:54.650631+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 64 sent 62 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:24.136666+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:24.149813+0000 osd.1 (osd.1) 64 : cluster [DBG] 5.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 62 heartbeat osd_stat(store_statfs(0x1be0ad000/0x0/0x1bfc00000, data 0xc1b03/0x11e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 64) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:24.136666+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:24.149813+0000 osd.1 (osd.1) 64 : cluster [DBG] 5.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63381504 unmapped: 442368 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 488965 data_alloc: 285212672 data_used: 114688
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_auth_request added challenge on 0x55c83f8a7000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _renew_subs
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:55.650938+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63709184 unmapped: 114688 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _renew_subs
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:56.651137+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 64 ms_handle_reset con 0x55c83f8a7000 session 0x55c83f239680
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63823872 unmapped: 0 heap: 63823872 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:57.651375+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 64 heartbeat osd_stat(store_statfs(0x1be0a8000/0x0/0x1bfc00000, data 0xc5454/0x124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:58.651529+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 66 sent 64 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:28.070326+0000 osd.1 (osd.1) 65 : cluster [DBG] 5.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:28.080203+0000 osd.1 (osd.1) 66 : cluster [DBG] 5.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 66) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:28.070326+0000 osd.1 (osd.1) 65 : cluster [DBG] 5.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:28.080203+0000 osd.1 (osd.1) 66 : cluster [DBG] 5.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:50:59.651843+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 495385 data_alloc: 285212672 data_used: 114688
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 64 heartbeat osd_stat(store_statfs(0x1be0a8000/0x0/0x1bfc00000, data 0xc5454/0x124000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:00.652008+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:01.652263+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=0 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000081 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=0 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000031
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000244 1 0.000088
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.037418365s of 10.272022247s, submitted: 41
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000059 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000333 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=0 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=0 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000016
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000083 1 0.000040
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000121 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=0 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=0 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 1 0.000043
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000112 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=0 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=0 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000012
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000027
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000015 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000080 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:02.652443+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 65 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.677857 2 0.000101
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.678257 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.677696 2 0.000045
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.678317 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.677862 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.677920 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.677248 2 0.000035
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.677349 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.677366 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000144 1 0.000242
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000136 1 0.000234
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000072 1 0.000097
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000006 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.678017 2 0.000050
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.678172 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.678200 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=65) [1] r=0 lpr=65 pi=[51,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000133 1 0.000187
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:03.652683+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 68 sent 66 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:33.121076+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:33.130894+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 68) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:33.121076+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:33.130894+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _renew_subs
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.e( v 54'450 lc 0'0 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=54'450 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.321549 5 0.000073
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.e( v 54'450 lc 0'0 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=54'450 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.e( v 54'450 lc 0'0 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=54'450 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.1e( v 54'458 lc 0'0 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=54'458 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.321356 5 0.000271
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.1e( v 54'458 lc 0'0 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=54'458 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.16( v 52'438 lc 0'0 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=52'438 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.321632 5 0.000073
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.1e( v 54'458 lc 0'0 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=54'458 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.16( v 52'438 lc 0'0 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=52'438 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.16( v 52'438 lc 0'0 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=52'438 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.6( v 53'453 lc 0'0 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=53'453 mlcod 0'0 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.320959 5 0.000073
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.6( v 53'453 lc 0'0 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=53'453 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.6( v 53'453 lc 0'0 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 crt=53'453 mlcod 0'0 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.e] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.6] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.e( v 54'450 lc 50'150 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.012390 4 0.000219
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.e( v 54'450 lc 50'150 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.e( v 54'450 lc 50'150 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000068 1 0.000050
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.e( v 54'450 lc 50'150 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045096 1 0.000033
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.6( v 53'453 lc 50'183 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.057620 4 0.000194
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.6( v 53'453 lc 50'183 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.6( v 53'453 lc 50'183 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000159 1 0.000102
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.6( v 53'453 lc 50'183 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 lcod 0'0 mlcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 63946752 unmapped: 1974272 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.062433 1 0.000106
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.16( v 52'438 lc 50'202 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.120691 4 0.000210
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.16( v 52'438 lc 50'202 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.16( v 52'438 lc 50'202 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000144 1 0.000098
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.16( v 52'438 lc 50'202 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.1e( v 54'458 lc 52'438 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.198300 4 0.000269
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.077394 1 0.000051
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.1e( v 54'458 lc 52'438 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.1e( v 54'458 lc 52'438 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000192 1 0.000114
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.1e( v 54'458 lc 52'438 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.054568 1 0.000080
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 67 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:04.652994+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 70 sent 68 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:34.093693+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.1a deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:34.103995+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.1a deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 70) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:34.093693+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.1a deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:34.103995+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.1a deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.501846 1 0.000103
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.700251 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 mlcod 0'0 active+remapped mbc={}] exit Started 2.021927 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=52'438 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.642614 1 0.000058
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.700313 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 mlcod 0'0 active+remapped mbc={}] exit Started 2.021909 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'450 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.579898 1 0.000063
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.700295 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 mlcod 0'0 active+remapped mbc={}] exit Started 2.021306 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 luod=0'0 crt=54'450 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=53'453 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 luod=0'0 crt=52'438 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 luod=0'0 crt=53'453 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 unknown mbc={}] exit Reset 0.000127 1 0.000175
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 unknown mbc={}] exit Reset 0.000056 1 0.000107
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.447160 1 0.000090
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.700445 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] exit Started 2.021854 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=66) [1]/[0] r=-1 lpr=66 pi=[51,66)/1 luod=0'0 crt=54'458 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 unknown mbc={}] exit Reset 0.000160 1 0.000240
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000090
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 luod=0'0 crt=54'458 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000074
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000058 1 0.000147
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=0/0 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=0/0 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 unknown mbc={}] exit Reset 0.000248 1 0.000320
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000067 1 0.000087
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=0/0 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=32
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=27
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=32
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=27
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001685 3 0.000074
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001992 3 0.000116
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=31
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=31
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=24
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002093 3 0.000112
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=24
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002210 3 0.000128
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 68 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64036864 unmapped: 1884160 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 545425 data_alloc: 285212672 data_used: 122880
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 68 heartbeat osd_stat(store_statfs(0x1be09b000/0x0/0x1bfc00000, data 0xcca10/0x132000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:05.653238+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 68 handle_osd_map epochs [68,69], i have 69, src has [1,69]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014573 2 0.000075
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016402 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015017 2 0.000176
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.017208 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=66/67 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015099 2 0.000184
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.017350 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=66/67 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=68/69 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=68/69 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015690 2 0.000252
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.018051 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=66/67 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003784 3 0.000244
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'458 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=68/69 n=7 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=68/69 n=5 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=66/51 les/c/f=67/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006247 3 0.000397
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.6( v 53'453 (0'0,53'453] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=53'453 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005581 3 0.000152
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000042 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=52'438 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=68/69 n=5 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006549 3 0.000136
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=68/69 n=5 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=68/69 n=5 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 69 pg[9.e( v 54'450 (0'0,54'450] local-lis/les=68/69 n=5 ec=51/44 lis/c=68/51 les/c/f=69/52/0 sis=68) [1] r=0 lpr=68 pi=[51,68)/1 crt=54'450 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64045056 unmapped: 1875968 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:06.653417+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64061440 unmapped: 1859584 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:07.653620+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 72 sent 70 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:37.098390+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:37.108216+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 72) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:37.098390+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:37.108216+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64069632 unmapped: 1851392 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:08.653848+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 69 heartbeat osd_stat(store_statfs(0x1be097000/0x0/0x1bfc00000, data 0xce641/0x135000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_auth_request added challenge on 0x55c841a07400
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_auth_request added challenge on 0x55c841a07800
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64126976 unmapped: 1794048 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_auth_request added challenge on 0x55c8414ac400
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:09.654056+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64151552 unmapped: 1769472 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 553487 data_alloc: 285212672 data_used: 122880
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:10.654212+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 74 sent 72 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:40.043501+0000 osd.1 (osd.1) 73 : cluster [DBG] 6.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:40.055814+0000 osd.1 (osd.1) 74 : cluster [DBG] 6.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64167936 unmapped: 1753088 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:11.654511+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 4 last_log 76 sent 74 num 4 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:41.080611+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:41.227935+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 74) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:40.043501+0000 osd.1 (osd.1) 73 : cluster [DBG] 6.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:40.055814+0000 osd.1 (osd.1) 74 : cluster [DBG] 6.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64167936 unmapped: 1753088 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:12.654725+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 76) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:41.080611+0000 osd.1 (osd.1) 75 : cluster [DBG] 4.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:41.227935+0000 osd.1 (osd.1) 76 : cluster [DBG] 4.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 71 handle_osd_map epochs [72,73], i have 71, src has [1,73]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.110975266s of 10.966486931s, submitted: 75
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 73 handle_osd_map epochs [72,73], i have 73, src has [1,73]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=0 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000123 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=0 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000025 1 0.000054
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000246 1 0.000141
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000319 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=0 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=0 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000016
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000067
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000022 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000138 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 73 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64200704 unmapped: 1720320 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 73 heartbeat osd_stat(store_statfs(0x1be08b000/0x0/0x1bfc00000, data 0xd5c1f/0x141000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:13.654936+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 73 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.013837 2 0.000088
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.014207 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.013520 2 0.000082
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.014317 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.013701 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.013731 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=73) [1] r=0 lpr=73 pi=[51,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000154 1 0.000221
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000162 1 0.000219
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000006 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 74 pg[9.1a( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64241664 unmapped: 1679360 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:14.655076+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.1a( v 53'445 lc 0'0 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=53'445 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.105470 6 0.000086
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.1a( v 53'445 lc 0'0 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=53'445 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.1a( v 53'445 lc 0'0 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=53'445 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.a( v 54'466 lc 0'0 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=54'466 mlcod 0'0 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.106146 6 0.000078
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.a( v 54'466 lc 0'0 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=54'466 mlcod 0'0 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.a( v 54'466 lc 0'0 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 crt=54'466 mlcod 0'0 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.1a( v 53'445 lc 50'337 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.010460 3 0.000230
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.1a( v 53'445 lc 50'337 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.1a( v 53'445 lc 50'337 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000081 1 0.000103
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.1a( v 53'445 lc 50'337 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038707 1 0.000059
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.a( v 54'466 lc 50'252 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.048772 3 0.000511
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.a( v 54'466 lc 50'252 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.a( v 54'466 lc 50'252 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000097 1 0.000115
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.a( v 54'466 lc 50'252 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64421888 unmapped: 1499136 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 580771 data_alloc: 285212672 data_used: 122880
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.089228 1 0.000041
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 75 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:15.655203+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 75 heartbeat osd_stat(store_statfs(0x1be089000/0x0/0x1bfc00000, data 0xd78f6/0x144000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64487424 unmapped: 1433600 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.972519 1 0.000089
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.110826 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 mlcod 0'0 active+remapped mbc={}] exit Started 2.217040 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=54'466 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 luod=0'0 crt=54'466 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.061996 1 0.000085
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.111706 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] exit Started 2.217220 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=74) [1]/[0] r=-1 lpr=74 pi=[51,74)/1 luod=0'0 crt=53'445 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 unknown mbc={}] exit Reset 0.000251 1 0.000371
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000240
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=0/0 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 luod=0'0 crt=53'445 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 unknown mbc={}] exit Reset 0.000592 1 0.000911
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 unknown mbc={}] exit Start 0.000138 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000539
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=0/0 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=54
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=54
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001757 3 0.000098
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=25
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001098 3 0.000131
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 76 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:16.655389+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64552960 unmapped: 1368064 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 76 handle_osd_map epochs [76,77], i have 77, src has [1,77]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993125 2 0.000087
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994389 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=74/75 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993698 2 0.000139
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995619 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=74/75 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=76/77 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=76/77 n=9 ec=51/44 lis/c=74/51 les/c/f=75/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/51 les/c/f=77/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015099 3 0.000179
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/51 les/c/f=77/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/51 les/c/f=77/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/51 les/c/f=77/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=53'445 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=76/77 n=9 ec=51/44 lis/c=76/51 les/c/f=77/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.014706 3 0.000157
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=76/77 n=9 ec=51/44 lis/c=76/51 les/c/f=77/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=76/77 n=9 ec=51/44 lis/c=76/51 les/c/f=77/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 77 pg[9.a( v 54'466 (0'0,54'466] local-lis/les=76/77 n=9 ec=51/44 lis/c=76/51 les/c/f=77/52/0 sis=76) [1] r=0 lpr=76 pi=[51,76)/1 crt=54'466 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:17.655607+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64659456 unmapped: 1261568 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:18.655961+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64667648 unmapped: 1253376 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 77 heartbeat osd_stat(store_statfs(0x1be07d000/0x0/0x1bfc00000, data 0xdcef3/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:19.656109+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64749568 unmapped: 1171456 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 600673 data_alloc: 285212672 data_used: 122880
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:20.656286+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 78 sent 76 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:50.126084+0000 osd.1 (osd.1) 77 : cluster [DBG] 6.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:50.138358+0000 osd.1 (osd.1) 78 : cluster [DBG] 6.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 78 heartbeat osd_stat(store_statfs(0x1be07d000/0x0/0x1bfc00000, data 0xdcef3/0x14f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64643072 unmapped: 1277952 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 78) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:50.126084+0000 osd.1 (osd.1) 77 : cluster [DBG] 6.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:50.138358+0000 osd.1 (osd.1) 78 : cluster [DBG] 6.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:21.656588+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 80 sent 78 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:51.078737+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.1 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:51.088415+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.1 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64651264 unmapped: 1269760 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:22.656808+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 4 last_log 82 sent 80 num 4 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:52.071370+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:52.080982+0000 osd.1 (osd.1) 82 : cluster [DBG] 5.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 80) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:51.078737+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.1 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:51.088415+0000 osd.1 (osd.1) 80 : cluster [DBG] 5.1 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.820221901s of 10.035409927s, submitted: 75
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64667648 unmapped: 1253376 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:23.657053+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 4 last_log 84 sent 82 num 4 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:53.101167+0000 osd.1 (osd.1) 83 : cluster [DBG] 6.2 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:53.110921+0000 osd.1 (osd.1) 84 : cluster [DBG] 6.2 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 82) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:52.071370+0000 osd.1 (osd.1) 81 : cluster [DBG] 5.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:52.080982+0000 osd.1 (osd.1) 82 : cluster [DBG] 5.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 84) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:53.101167+0000 osd.1 (osd.1) 83 : cluster [DBG] 6.2 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:53.110921+0000 osd.1 (osd.1) 84 : cluster [DBG] 6.2 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64684032 unmapped: 1236992 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:24.657272+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 86 sent 84 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:54.126879+0000 osd.1 (osd.1) 85 : cluster [DBG] 6.3 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:54.136604+0000 osd.1 (osd.1) 86 : cluster [DBG] 6.3 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 86) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:54.126879+0000 osd.1 (osd.1) 85 : cluster [DBG] 6.3 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:54.136604+0000 osd.1 (osd.1) 86 : cluster [DBG] 6.3 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 79 handle_osd_map epochs [79,80], i have 80, src has [1,80]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=0 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000100 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=0 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000028 1 0.000049
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000244 1 0.000066
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000304 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=0 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000087 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=0 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000038
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000318 1 0.000065
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000040 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000384 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 80 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 80 heartbeat osd_stat(store_statfs(0x1bced4000/0x0/0x1bfc00000, data 0xe2764/0x158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64659456 unmapped: 1261568 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 620224 data_alloc: 285212672 data_used: 139264
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:25.657510+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 88 sent 86 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:55.145668+0000 osd.1 (osd.1) 87 : cluster [DBG] 6.5 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:55.155521+0000 osd.1 (osd.1) 88 : cluster [DBG] 6.5 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 88) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:55.145668+0000 osd.1 (osd.1) 87 : cluster [DBG] 6.5 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:55.155521+0000 osd.1 (osd.1) 88 : cluster [DBG] 6.5 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 80 handle_osd_map epochs [80,81], i have 81, src has [1,81]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.030792 2 0.000078
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.031151 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.031185 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000150 1 0.000218
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.030495 2 0.000077
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.031027 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.031061 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=80) [1] r=0 lpr=80 pi=[63,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 81 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000127 1 0.000283
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000030 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 81 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64684032 unmapped: 1236992 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:26.657726+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 64684032 unmapped: 1236992 heap: 65921024 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.1d( v 54'449 lc 0'0 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=54'449 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.355914 5 0.000103
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.1d( v 54'449 lc 0'0 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=54'449 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.1d( v 54'449 lc 0'0 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=54'449 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.d( v 54'465 lc 0'0 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=54'465 mlcod 0'0 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.356527 5 0.000093
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.d( v 54'465 lc 0'0 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=54'465 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.d( v 54'465 lc 0'0 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 crt=54'465 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.d( v 54'465 lc 50'313 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.069482 4 0.000351
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.d( v 54'465 lc 50'313 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.d( v 54'465 lc 50'313 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000136 1 0.000054
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.d( v 54'465 lc 50'313 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:27.657882+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.159877 1 0.000077
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.1d( v 54'449 lc 50'427 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.229825 4 0.000250
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.1d( v 54'449 lc 50'427 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.1d( v 54'449 lc 50'427 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000128 1 0.000114
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.1d( v 54'449 lc 50'427 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.079071 1 0.000068
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 82 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.497718 1 0.000083
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.806909 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 mlcod 0'0 active+remapped mbc={}] exit Started 2.162895 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'449 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 luod=0'0 crt=54'449 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 unknown mbc={}] exit Reset 0.000137 1 0.000195
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.577829 1 0.000079
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.807453 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 mlcod 0'0 active+remapped mbc={}] exit Started 2.164029 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[63,81)/1 luod=0'0 crt=54'465 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 luod=0'0 crt=54'465 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 unknown mbc={}] exit Reset 0.000059 1 0.000078
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.016570 2 0.000052
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=0/0 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.016010 2 0.000048
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=0/0 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 83 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=34
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=34
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=81/82 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000688 2 0.000140
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=81/82 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=46
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=81/82 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000034 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=81/82 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=46
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=81/82 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000711 2 0.000152
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=81/82 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=81/82 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 83 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=81/82 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65077248 unmapped: 1892352 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 83 heartbeat osd_stat(store_statfs(0x1bcecb000/0x0/0x1bfc00000, data 0xe62f8/0x161000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:28.658003+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65077248 unmapped: 1892352 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=81/82 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.247517 2 0.000112
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=81/82 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.264913 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=81/82 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=81/82 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.248077 2 0.000085
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=81/82 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.264880 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=81/82 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=83/84 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/63 les/c/f=84/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013785 3 0.000335
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/63 les/c/f=84/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/63 les/c/f=84/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/63 les/c/f=84/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'449 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=83/84 n=9 ec=51/44 lis/c=81/63 les/c/f=82/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=83/84 n=9 ec=51/44 lis/c=83/63 les/c/f=84/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.041146 3 0.000566
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=83/84 n=9 ec=51/44 lis/c=83/63 les/c/f=84/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=83/84 n=9 ec=51/44 lis/c=83/63 les/c/f=84/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.d( v 54'465 (0'0,54'465] local-lis/les=83/84 n=9 ec=51/44 lis/c=83/63 les/c/f=84/64/0 sis=83) [1] r=0 lpr=83 pi=[63,83)/1 crt=54'465 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:29.658163+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 90 sent 88 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:59.067838+0000 osd.1 (osd.1) 89 : cluster [DBG] 5.2 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:51:59.077597+0000 osd.1 (osd.1) 90 : cluster [DBG] 5.2 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65085440 unmapped: 1884160 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 653012 data_alloc: 285212672 data_used: 147456
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 84 heartbeat osd_stat(store_statfs(0x1bcec6000/0x0/0x1bfc00000, data 0xe9cd6/0x167000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=0 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000130 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=0 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000051
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000124 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000206 1 0.000207
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000288 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=0 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=0 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000049
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000176 1 0.000080
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000253 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 90) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:59.067838+0000 osd.1 (osd.1) 89 : cluster [DBG] 5.2 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:51:59.077597+0000 osd.1 (osd.1) 90 : cluster [DBG] 5.2 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 84 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.193238 2 0.000106
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.193584 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.193741 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000139 1 0.000189
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.193299 2 0.000099
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.193591 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.193617 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=84) [1] r=0 lpr=84 pi=[63,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000095 1 0.000143
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000006 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 85 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:30.658415+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=0 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000077 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=0 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000024
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000188 1 0.000057
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000034 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000249 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 85 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65101824 unmapped: 1867776 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 85 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.759712 2 0.000080
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.759999 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.760025 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=85) [1] r=0 lpr=85 pi=[51,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000092 1 0.000170
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000006 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.1f( v 54'454 lc 0'0 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=54'454 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.025049 6 0.000111
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.1f( v 54'454 lc 0'0 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=54'454 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.1f( v 54'454 lc 0'0 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=54'454 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.f( v 54'463 lc 0'0 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=54'463 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.024828 6 0.000104
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.f( v 54'463 lc 0'0 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=54'463 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.f( v 54'463 lc 0'0 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 crt=54'463 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.1f( v 54'454 lc 52'435 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.016246 3 0.000189
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.1f( v 54'454 lc 52'435 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.1f( v 54'454 lc 52'435 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000054 1 0.000107
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.1f( v 54'454 lc 52'435 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.f] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:31.658547+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 92 sent 90 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:01.074097+0000 osd.1 (osd.1) 91 : cluster [DBG] 5.7 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:01.084065+0000 osd.1 (osd.1) 92 : cluster [DBG] 5.7 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.f( v 54'463 lc 50'136 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.084024 3 0.000214
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.f( v 54'463 lc 50'136 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.068072 1 0.000067
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.f( v 54'463 lc 50'136 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000156 1 0.000059
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.f( v 54'463 lc 50'136 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65241088 unmapped: 1728512 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.670400 1 0.000061
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 86 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 92) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:01.074097+0000 osd.1 (osd.1) 91 : cluster [DBG] 5.7 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:01.084065+0000 osd.1 (osd.1) 92 : cluster [DBG] 5.7 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=0 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=0 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000048
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000187 1 0.000058
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000067 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000373 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.10( v 52'436 lc 0'0 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=52'436 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.025612 5 0.000072
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.10( v 52'436 lc 0'0 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=52'436 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.10( v 52'436 lc 0'0 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 crt=52'436 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.938244 1 0.000079
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.022778 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started 2.047862 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 luod=0'0 crt=54'454 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 unknown mbc={}] exit Reset 0.000084 1 0.000113
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.268590 1 0.000052
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.023310 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] exit Started 2.048190 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=85) [1]/[2] r=-1 lpr=85 pi=[63,85)/1 luod=0'0 crt=54'463 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 luod=0'0 crt=54'463 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 unknown mbc={}] exit Reset 0.000276 1 0.000340
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 unknown mbc={}] exit Start 0.000056 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.005244 2 0.000039
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 87 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004823 2 0.000239
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=0/0 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=30
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=30
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=85/86 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001833 2 0.000158
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=85/86 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=85/86 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=85/86 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=42
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=42
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=85/86 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001091 2 0.000185
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=85/86 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=85/86 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=85/86 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 87 heartbeat osd_stat(store_statfs(0x1bcebe000/0x0/0x1bfc00000, data 0xed836/0x16f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.10] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.10( v 52'436 lc 50'341 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.015611 4 0.000270
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.10( v 52'436 lc 50'341 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.10( v 52'436 lc 50'341 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000088 1 0.000115
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.10( v 52'436 lc 50'341 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:32.658706+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.027033 1 0.000067
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 87 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65249280 unmapped: 1720320 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:33.658919+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.910488129s of 10.577212334s, submitted: 97
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 87 handle_osd_map epochs [87,88], i have 88, src has [1,88]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=85/86 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.037601 2 0.000133
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=85/86 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.044829 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=85/86 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.045387 2 0.000293
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.045888 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.045937 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=87) [1] r=0 lpr=87 pi=[51,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=85/86 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.037770 2 0.000204
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=85/86 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.043872 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=85/86 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=87/88 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000323 1 0.000399
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000006 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.003706 1 0.000054
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.046583 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 mlcod 0'0 active+remapped mbc={}] exit Started 2.072280 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[51,86)/1 luod=0'0 crt=52'436 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 luod=0'0 crt=52'436 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 unknown mbc={}] exit Reset 0.000099 1 0.000140
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 88 handle_osd_map epochs [87,88], i have 88, src has [1,88]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/63 les/c/f=88/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003130 4 0.000177
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/63 les/c/f=88/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/63 les/c/f=88/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/63 les/c/f=88/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'454 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=87/88 n=7 ec=51/44 lis/c=85/63 les/c/f=86/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=87/88 n=7 ec=51/44 lis/c=87/63 les/c/f=88/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006644 4 0.000115
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=87/88 n=7 ec=51/44 lis/c=87/63 les/c/f=88/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=87/88 n=7 ec=51/44 lis/c=87/63 les/c/f=88/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.f( v 54'463 (0'0,54'463] local-lis/les=87/88 n=7 ec=51/44 lis/c=87/63 les/c/f=88/64/0 sis=87) [1] r=0 lpr=87 pi=[63,87)/1 crt=54'463 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.022514 2 0.000052
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=0/0 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=10
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000737 2 0.000125
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 88 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 88 heartbeat osd_stat(store_statfs(0x1bceba000/0x0/0x1bfc00000, data 0xef604/0x172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 88 heartbeat osd_stat(store_statfs(0x1bceb6000/0x0/0x1bfc00000, data 0xf1284/0x175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65290240 unmapped: 1679360 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:34.659075+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 88 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=0 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=0 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000034
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000216 1 0.000063
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000038 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000283 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.11( v 54'454 lc 0'0 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=54'454 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.001339 5 0.000070
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.11( v 54'454 lc 0'0 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=54'454 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.11( v 54'454 lc 0'0 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 crt=54'454 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.977055 2 0.000068
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000404 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=86/87 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=88/89 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=88/89 n=3 ec=51/44 lis/c=86/51 les/c/f=87/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=88/89 n=3 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006311 4 0.000230
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=88/89 n=3 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=88/89 n=3 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.10( v 52'436 (0'0,52'436] local-lis/les=88/89 n=3 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1] r=0 lpr=88 pi=[51,88)/1 crt=52'436 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.11] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.11( v 54'454 lc 50'416 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007840 4 0.000157
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.11( v 54'454 lc 50'416 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.11( v 54'454 lc 50'416 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000146 1 0.000071
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.11( v 54'454 lc 50'416 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.044720 1 0.000048
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 89 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65363968 unmapped: 1605632 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 705176 data_alloc: 285212672 data_used: 147456
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:35.659226+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.085244 1 0.000111
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.138145 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] exit Started 2.139524 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=88) [1]/[0] r=-1 lpr=88 pi=[51,88)/1 luod=0'0 crt=54'454 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 luod=0'0 crt=54'454 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 unknown mbc={}] exit Reset 0.000146 1 0.000219
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.138591 2 0.000085
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.138912 0 0.000000
Jan 31 07:16:52 compute-1 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.138938 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=89) [1] r=0 lpr=89 pi=[51,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000053 1 0.000095
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.021987 2 0.000508
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=0/0 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=32
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=32
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001246 2 0.000168
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 90 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 90 heartbeat osd_stat(store_statfs(0x1bceb4000/0x0/0x1bfc00000, data 0xf3096/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65388544 unmapped: 1581056 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:36.659361+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 94 sent 92 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:06.149636+0000 osd.1 (osd.1) 93 : cluster [DBG] 4.5 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:06.159621+0000 osd.1 (osd.1) 94 : cluster [DBG] 4.5 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 90 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990335 2 0.000124
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013689 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=88/89 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.12( v 53'447 lc 0'0 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=53'447 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.013922 5 0.000195
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.12( v 53'447 lc 0'0 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=53'447 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.12( v 53'447 lc 0'0 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=51/51 les/c/f=52/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 crt=53'447 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=90/91 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=90/91 n=6 ec=51/44 lis/c=88/51 les/c/f=89/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=90/91 n=6 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002987 4 0.000216
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=90/91 n=6 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=90/91 n=6 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.11( v 54'454 (0'0,54'454] local-lis/les=90/91 n=6 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1] r=0 lpr=90 pi=[51,90)/1 crt=54'454 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.12] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.12( v 53'447 lc 50'431 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006984 4 0.000202
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.12( v 53'447 lc 50'431 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.12( v 53'447 lc 50'431 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000111 1 0.000060
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.12( v 53'447 lc 50'431 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.040097 1 0.000060
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 91 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 94) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:06.149636+0000 osd.1 (osd.1) 93 : cluster [DBG] 4.5 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:06.159621+0000 osd.1 (osd.1) 94 : cluster [DBG] 4.5 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65404928 unmapped: 1564672 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:37.659557+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.964543 1 0.000059
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.011919 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 mlcod 0'0 active+remapped mbc={}] exit Started 2.025891 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[51,90)/1 luod=0'0 crt=53'447 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 luod=0'0 crt=53'447 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 unknown mbc={}] exit Reset 0.000168 1 0.000261
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000071
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=0/0 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0olog.dups.size()=24
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=24
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001319 3 0.000080
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 92 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65437696 unmapped: 1531904 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:38.659714+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 92 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001344 2 0.000105
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002815 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=90/91 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=92/93 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=92/93 n=5 ec=51/44 lis/c=90/51 les/c/f=91/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=92/93 n=5 ec=51/44 lis/c=92/51 les/c/f=93/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004120 4 0.000210
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=92/93 n=5 ec=51/44 lis/c=92/51 les/c/f=93/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=92/93 n=5 ec=51/44 lis/c=92/51 les/c/f=93/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 93 pg[9.12( v 53'447 (0'0,53'447] local-lis/les=92/93 n=5 ec=51/44 lis/c=92/51 les/c/f=93/52/0 sis=92) [1] r=0 lpr=92 pi=[51,92)/1 crt=53'447 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65437696 unmapped: 1531904 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:39.659877+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65445888 unmapped: 1523712 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 724900 data_alloc: 285212672 data_used: 147456
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 93 heartbeat osd_stat(store_statfs(0x1bcea6000/0x0/0x1bfc00000, data 0xfa4f1/0x186000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:40.660003+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65454080 unmapped: 1515520 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:41.660139+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15(unlocked)] enter Initial
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=0 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000090 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=0 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000041
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000028 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000234 1 0.000079
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000085 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000357 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65470464 unmapped: 1499136 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:42.660414+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=52'438 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 36.893158 79 0.000363
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=52'438 mlcod 0'0 active mbc={}] exit Started/Primary/Active 36.898892 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=52'438 mlcod 0'0 active mbc={}] exit Started/Primary 37.916978 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.004632 2 0.000144
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.005038 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.005091 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=94) [1] r=0 lpr=94 pi=[63,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=52'438 mlcod 0'0 active mbc={}] exit Started 37.917091 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=52'438 mlcod 0'0 active mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000128 1 0.000191
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95 pruub=11.106972694s) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 active pruub 194.215652466s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95 pruub=11.106878281s) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 194.215652466s@ mbc={}] exit Reset 0.000272 1 0.000431
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95 pruub=11.106878281s) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 194.215652466s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95 pruub=11.106878281s) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 194.215652466s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95 pruub=11.106878281s) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 194.215652466s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95 pruub=11.106878281s) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 194.215652466s@ mbc={}] exit Start 0.000034 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 95 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95 pruub=11.106878281s) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 194.215652466s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65503232 unmapped: 1466368 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:43.660733+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.787131310s of 10.065130234s, submitted: 63
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.15( v 54'444 lc 0'0 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=54'444 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.036827 5 0.000067
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.15( v 54'444 lc 0'0 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=54'444 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.15( v 54'444 lc 0'0 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=63/63 les/c/f=64/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 crt=54'444 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036676 3 0.000132
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.036757 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=95) [2] r=-1 lpr=95 pi=[68,95)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped mbc={}] exit Reset 0.000412 1 0.000451
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped mbc={}] exit Start 0.000041 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000131 2 0.000209
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000042 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.15] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.15( v 54'444 lc 50'387 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.008117 4 0.000193
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.15( v 54'444 lc 50'387 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.15( v 54'444 lc 50'387 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000115 1 0.000066
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.15( v 54'444 lc 50'387 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.053768 1 0.000041
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 96 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 96 handle_osd_map epochs [96,97], i have 97, src has [1,97]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.377909 3 0.000115
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.378215 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=68/69 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.316784 1 0.000076
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.378935 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 mlcod 0'0 active+remapped mbc={}] exit Started 1.415813 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[63,95)/1 luod=0'0 crt=54'444 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 luod=0'0 crt=54'444 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 unknown mbc={}] exit Reset 0.000103 1 0.000155
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 unknown mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 unknown mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001835 2 0.000056
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=0/0 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Jan 31 07:16:52 compute-1 ceph-osd[79145]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=95/96 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001462 2 0.000109
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=95/96 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=95/96 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=95/96 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.013699 5 0.000863
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000175 1 0.000058
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000761 1 0.000035
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 52'438 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039157 2 0.000060
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 97 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 52'438 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65617920 unmapped: 1351680 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:44.660966+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 97 heartbeat osd_stat(store_statfs(0x1bce9d000/0x0/0x1bfc00000, data 0xffd0b/0x190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 52'438 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.956489 1 0.000804
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 52'438 active+remapped mbc={255={}}] exit Started/Primary/Active 1.011091 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 52'438 active+remapped mbc={255={}}] exit Started/Primary 1.389363 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 52'438 active+remapped mbc={255={}}] exit Started 1.389504 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=95/96 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007548 2 0.000110
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[68,96)/1 crt=52'438 mlcod 52'438 active+remapped mbc={255={}}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=95/96 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010961 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=95/96 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=97/98 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98 pruub=15.002883911s) [2] async=[2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 52'438 active pruub 200.538467407s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98 pruub=15.002782822s) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 200.538467407s@ mbc={}] exit Reset 0.000167 1 0.000278
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98 pruub=15.002782822s) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 200.538467407s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98 pruub=15.002782822s) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 200.538467407s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98 pruub=15.002782822s) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 200.538467407s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98 pruub=15.002782822s) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 200.538467407s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98 pruub=15.002782822s) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY pruub 200.538467407s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=97/98 n=4 ec=51/44 lis/c=95/63 les/c/f=96/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=97/98 n=4 ec=51/44 lis/c=97/63 les/c/f=98/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006028 4 0.000195
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=97/98 n=4 ec=51/44 lis/c=97/63 les/c/f=98/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=97/98 n=4 ec=51/44 lis/c=97/63 les/c/f=98/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 98 pg[9.15( v 54'444 (0'0,54'444] local-lis/les=97/98 n=4 ec=51/44 lis/c=97/63 les/c/f=98/64/0 sis=97) [1] r=0 lpr=97 pi=[63,97)/1 crt=54'444 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65667072 unmapped: 1302528 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 749464 data_alloc: 285212672 data_used: 151552
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:45.661126+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 96 sent 94 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:15.152835+0000 osd.1 (osd.1) 95 : cluster [DBG] 4.a deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:15.162685+0000 osd.1 (osd.1) 96 : cluster [DBG] 4.a deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 96) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:15.152835+0000 osd.1 (osd.1) 95 : cluster [DBG] 4.a deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:15.162685+0000 osd.1 (osd.1) 96 : cluster [DBG] 4.a deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 99 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.057929 7 0.000151
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 99 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 99 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 99 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000130 1 0.000067
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 99 pg[9.16( v 52'438 (0'0,52'438] local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 99 pg[9.16( v 52'438 (0'0,52'438] lb MIN local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98) [2] r=-1 lpr=98 DELETING pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033700 2 0.000446
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 99 pg[9.16( v 52'438 (0'0,52'438] lb MIN local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.033909 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 99 pg[9.16( v 52'438 (0'0,52'438] lb MIN local-lis/les=96/97 n=3 ec=51/44 lis/c=96/68 les/c/f=97/69/0 sis=98) [2] r=-1 lpr=98 pi=[68,98)/1 crt=52'438 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.091913 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.16] failed. State was: not registered w/ OSD
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65699840 unmapped: 1269760 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:46.661420+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65724416 unmapped: 1245184 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:47.661649+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 99 heartbeat osd_stat(store_statfs(0x1bce92000/0x0/0x1bfc00000, data 0x10517b/0x198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65740800 unmapped: 1228800 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:48.661930+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65757184 unmapped: 1212416 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:49.662101+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65781760 unmapped: 1187840 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 743466 data_alloc: 285212672 data_used: 147456
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:50.662333+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 98 sent 96 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:20.236150+0000 osd.1 (osd.1) 97 : cluster [DBG] 6.7 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:20.248995+0000 osd.1 (osd.1) 98 : cluster [DBG] 6.7 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 98) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:20.236150+0000 osd.1 (osd.1) 97 : cluster [DBG] 6.7 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:20.248995+0000 osd.1 (osd.1) 98 : cluster [DBG] 6.7 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65789952 unmapped: 1179648 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:51.662567+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 99 heartbeat osd_stat(store_statfs(0x1bca86000/0x0/0x1bfc00000, data 0x10517b/0x198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65814528 unmapped: 1155072 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:52.662715+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 100 sent 98 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:22.317224+0000 osd.1 (osd.1) 99 : cluster [DBG] 4.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:22.327323+0000 osd.1 (osd.1) 100 : cluster [DBG] 4.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65822720 unmapped: 1146880 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 100) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:22.317224+0000 osd.1 (osd.1) 99 : cluster [DBG] 4.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:22.327323+0000 osd.1 (osd.1) 100 : cluster [DBG] 4.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:53.662986+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65830912 unmapped: 1138688 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:54.663150+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.326997757s of 11.627099037s, submitted: 56
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 65871872 unmapped: 1097728 heap: 66969600 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 752908 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:55.663333+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 102 sent 100 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:25.370845+0000 osd.1 (osd.1) 101 : cluster [DBG] 6.8 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:25.380528+0000 osd.1 (osd.1) 102 : cluster [DBG] 6.8 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 101 handle_osd_map epochs [102,103], i have 101, src has [1,103]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 102) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:25.370845+0000 osd.1 (osd.1) 101 : cluster [DBG] 6.8 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:25.380528+0000 osd.1 (osd.1) 102 : cluster [DBG] 6.8 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 66985984 unmapped: 1032192 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:56.663610+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=53'445 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.027850 83 0.000538
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=53'445 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.043104 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=53'445 mlcod 0'0 active mbc={}] exit Started/Primary 41.037629 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=53'445 mlcod 0'0 active mbc={}] exit Started 41.037881 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=76) [1] r=0 lpr=76 crt=53'445 mlcod 0'0 active mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104 pruub=15.973120689s) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 active pruub 213.712402344s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104 pruub=15.972820282s) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 213.712402344s@ mbc={}] exit Reset 0.000370 1 0.000445
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104 pruub=15.972820282s) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 213.712402344s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104 pruub=15.972820282s) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 213.712402344s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104 pruub=15.972820282s) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 213.712402344s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104 pruub=15.972820282s) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 213.712402344s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 104 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104 pruub=15.972820282s) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 213.712402344s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 66977792 unmapped: 1040384 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 104 heartbeat osd_stat(store_statfs(0x1bca75000/0x0/0x1bfc00000, data 0x10e3eb/0x1a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:57.663810+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67018752 unmapped: 999424 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.189184 3 0.000165
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.189261 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=104) [0] r=-1 lpr=104 pi=[76,104)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped mbc={}] exit Reset 0.000232 1 0.000294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped mbc={}] exit Start 0.000036 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000139
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 105 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:58.664020+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 105 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.599012 4 0.000130
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.599210 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=76/77 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=76/76 les/c/f=77/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.006975 5 0.000331
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000149 1 0.000050
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000708 1 0.000061
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 53'445 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.030007 2 0.000047
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 106 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 53'445 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67125248 unmapped: 892928 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:51:59.664215+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _renew_subs
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67141632 unmapped: 876544 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 772256 data_alloc: 285212672 data_used: 159744
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 106 heartbeat osd_stat(store_statfs(0x1bca6b000/0x0/0x1bfc00000, data 0x113aa8/0x1b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 53'445 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.426305 1 0.000152
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 53'445 active+remapped mbc={255={}}] exit Started/Primary/Active 1.464467 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 53'445 active+remapped mbc={255={}}] exit Started/Primary 2.063716 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 53'445 active+remapped mbc={255={}}] exit Started 2.063810 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=105) [0]/[1] async=[0] r=0 lpr=105 pi=[76,105)/1 crt=53'445 mlcod 53'445 active+remapped mbc={255={}}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107 pruub=14.542356491s) [0] async=[0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 53'445 active pruub 215.535385132s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107 pruub=14.541945457s) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 215.535385132s@ mbc={}] exit Reset 0.000482 1 0.000612
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107 pruub=14.541945457s) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 215.535385132s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107 pruub=14.541945457s) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 215.535385132s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107 pruub=14.541945457s) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 215.535385132s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107 pruub=14.541945457s) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 215.535385132s@ mbc={}] exit Start 0.000024 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 107 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107 pruub=14.541945457s) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY pruub 215.535385132s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:00.664391+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.745810 7 0.000247
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000161 1 0.000175
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] lb MIN local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=-1 lpr=107 DELETING pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.035690 2 0.000413
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] lb MIN local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.035996 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 108 pg[9.1a( v 53'445 (0'0,53'445] lb MIN local-lis/les=105/106 n=4 ec=51/44 lis/c=105/76 les/c/f=106/77/0 sis=107) [0] r=-1 lpr=107 pi=[76,107)/1 crt=53'445 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.781954 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1a] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67158016 unmapped: 860160 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:01.664540+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67239936 unmapped: 778240 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:02.664693+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67239936 unmapped: 778240 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:03.664882+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67248128 unmapped: 770048 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 109 heartbeat osd_stat(store_statfs(0x1bca68000/0x0/0x1bfc00000, data 0x1172b9/0x1b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:04.665048+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 104 sent 102 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:34.312773+0000 osd.1 (osd.1) 103 : cluster [DBG] 3.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:34.322726+0000 osd.1 (osd.1) 104 : cluster [DBG] 3.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 104) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:34.312773+0000 osd.1 (osd.1) 103 : cluster [DBG] 3.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:34.322726+0000 osd.1 (osd.1) 104 : cluster [DBG] 3.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67297280 unmapped: 720896 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 773268 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:05.665361+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67305472 unmapped: 712704 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:06.665563+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.806421280s of 11.933482170s, submitted: 37
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=83) [1] r=0 lpr=83 crt=54'449 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.759632 82 0.000340
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=83) [1] r=0 lpr=83 crt=54'449 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.773544 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=83) [1] r=0 lpr=83 crt=54'449 mlcod 0'0 active mbc={}] exit Started/Primary 39.038479 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=83) [1] r=0 lpr=83 crt=54'449 mlcod 0'0 active mbc={}] exit Started 39.038510 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=83) [1] r=0 lpr=83 crt=54'449 mlcod 0'0 active mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=10.240897179s) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 active pruub 217.942642212s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=10.240834236s) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 217.942642212s@ mbc={}] exit Reset 0.000100 1 0.000163
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=10.240834236s) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 217.942642212s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=10.240834236s) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 217.942642212s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=10.240834236s) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 217.942642212s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=10.240834236s) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 217.942642212s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 111 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111 pruub=10.240834236s) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 217.942642212s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67330048 unmapped: 688128 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:07.665768+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 106 sent 104 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:37.383123+0000 osd.1 (osd.1) 105 : cluster [DBG] 3.c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:37.392721+0000 osd.1 (osd.1) 106 : cluster [DBG] 3.c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 106) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:37.383123+0000 osd.1 (osd.1) 105 : cluster [DBG] 3.c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:37.392721+0000 osd.1 (osd.1) 106 : cluster [DBG] 3.c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.000835 3 0.000114
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.000913 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=111) [2] r=-1 lpr=111 pi=[83,111)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped mbc={}] exit Reset 0.000129 1 0.000205
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000052
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 112 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67354624 unmapped: 663552 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:08.666011+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 112 heartbeat osd_stat(store_statfs(0x1bca5f000/0x0/0x1bfc00000, data 0x11ca12/0x1be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67362816 unmapped: 655360 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.344855 4 0.000107
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.345033 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=83/84 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=54'458 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 63.835948 134 0.000620
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=54'458 mlcod 0'0 active mbc={}] exit Started/Primary/Active 63.839851 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=54'458 mlcod 0'0 active mbc={}] exit Started/Primary 64.856281 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=54'458 mlcod 0'0 active mbc={}] exit Started 64.856314 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=68) [1] r=0 lpr=68 crt=54'458 mlcod 0'0 active mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113 pruub=8.165588379s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 active pruub 218.213668823s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113 pruub=8.165482521s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 218.213668823s@ mbc={}] exit Reset 0.000207 1 0.000263
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113 pruub=8.165482521s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 218.213668823s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113 pruub=8.165482521s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 218.213668823s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113 pruub=8.165482521s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 218.213668823s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113 pruub=8.165482521s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 218.213668823s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113 pruub=8.165482521s) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 218.213668823s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:09.666139+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=83/83 les/c/f=84/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.047398 5 0.000602
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000113 1 0.000072
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000592 1 0.000106
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.269148 2 0.000077
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 113 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 54'449 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67452928 unmapped: 565248 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 784329 data_alloc: 285212672 data_used: 163840
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:10.666308+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 113 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 113 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.329449 3 0.000126
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.329543 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=113) [0] r=-1 lpr=113 pi=[68,113)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped mbc={}] exit Reset 0.000124 1 0.000205
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000072
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000040 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
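The enter/exit lines above are the PG peering state machine at work: after epoch 114 made osd.1 the acting primary for 9.1e ([0]/[1], async=[0]), the PG walks Reset -> Started/Primary/Peering/GetInfo -> GetLog -> GetMissing -> WaitUpThru, and each exit line carries three trailing numbers: seconds spent in the state, events handled there, and seconds spent handling them. A sketch (hypothetical helper, written for journal lines shaped like the ones above) that rebuilds a per-PG timeline:

    import re
    from collections import defaultdict

    # Matches "... pg[9.1e( ... )] enter <state>" and
    # "... pg[9.1e( ... )] exit <state> <secs> <events> <event-secs>".
    PAT = re.compile(r"pg\[(\S+)\(.*\]\s+(enter|exit)\s+(\S+)"
                     r"(?:\s+([\d.]+)\s+(\d+)\s+([\d.]+))?")

    def pg_timeline(lines):
        """Map pgid -> [(event, state, seconds-in-state or None), ...]."""
        timeline = defaultdict(list)
        for line in lines:
            m = PAT.search(line)
            if m:
                pgid, event, state, secs = m.group(1, 2, 3, 4)
                timeline[pgid].append(
                    (event, state, float(secs) if secs else None))
        return timeline

Fed the lines above, this yields entries such as ('exit', 'Started/Primary/Peering/GetInfo', 6e-05) for pg 9.1e; the "state<Start>: transitioning to ..." decision lines deliberately fall through the pattern.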
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.012981 1 0.000182
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started/Primary/Active 1.330620 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=87) [1] r=0 lpr=87 crt=54'454 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.300296 78 0.000330
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started/Primary 2.675681 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=87) [1] r=0 lpr=87 crt=54'454 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.303506 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 54'449 active+remapped mbc={255={}}] exit Started 2.675714 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=112) [2]/[1] async=[2] r=0 lpr=112 pi=[83,112)/1 crt=54'449 mlcod 54'449 active+remapped mbc={255={}}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=87) [1] r=0 lpr=87 crt=54'454 mlcod 0'0 active mbc={}] exit Started/Primary 38.348372 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=87) [1] r=0 lpr=87 crt=54'454 mlcod 0'0 active mbc={}] exit Started 38.348428 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=87) [1] r=0 lpr=87 crt=54'454 mlcod 0'0 active mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=14.716454506s) [2] async=[2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 54'449 active pruub 226.095108032s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114 pruub=10.700444221s) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 active pruub 222.079086304s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114 pruub=10.700316429s) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 222.079086304s@ mbc={}] exit Reset 0.000152 1 0.000237
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114 pruub=10.700316429s) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 222.079086304s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114 pruub=10.700316429s) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 222.079086304s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=14.716309547s) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 226.095108032s@ mbc={}] exit Reset 0.000210 1 0.000304
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114 pruub=10.700316429s) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 222.079086304s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=14.716309547s) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 226.095108032s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114 pruub=10.700316429s) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 222.079086304s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=14.716309547s) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 226.095108032s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114 pruub=10.700316429s) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 222.079086304s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=14.716309547s) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 226.095108032s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=14.716309547s) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 226.095108032s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 114 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114 pruub=14.716309547s) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY pruub 226.095108032s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67411968 unmapped: 606208 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:11.666483+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
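The handle_osd_map lines track incremental OSDMap catch-up: "epochs [114,115], i have 114, src has [1,115]" means the message carries maps 114..115, this OSD reports having up to 114, and the sender stores 1..115; once "i have" matches the top of the sender's range the OSD is current, which is why the follow-up lines show [115,115] with "i have 115". A small sketch along those lines (hypothetical parser for this message shape):

    import re

    MAP_RE = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], "
                        r"i have (\d+), src has \[(\d+),(\d+)\]")

    def map_lag(line):
        """Epochs this OSD still trails the sender by, or None if no match."""
        m = MAP_RE.search(line)
        if not m:
            return None
        have, newest = int(m.group(3)), int(m.group(5))
        return newest - have

    print(map_lag("osd.1 114 handle_osd_map epochs [114,115], "
                  "i have 114, src has [1,115]"))  # prints: 1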
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.985906 3 0.000066
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.985967 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=114) [0] r=-1 lpr=114 pi=[87,114)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.986698 4 0.000115
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.986877 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=68/69 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped mbc={}] exit Reset 0.000106 1 0.000148
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped mbc={}] exit Start 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000113 1 0.000086
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=68/68 les/c/f=69/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.008068 5 0.000535
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000150 1 0.000122
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000634 1 0.000070
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 54'458 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.037567 2 0.000064
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 54'458 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.069980 7 0.000290
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000147 1 0.000129
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1d( v 54'449 (0'0,54'449] local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1d( v 54'449 (0'0,54'449] lb MIN local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114) [2] r=-1 lpr=114 DELETING pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.115193 2 0.000400
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1d( v 54'449 (0'0,54'449] lb MIN local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.115463 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 115 pg[9.1d( v 54'449 (0'0,54'449] lb MIN local-lis/les=112/113 n=4 ec=51/44 lis/c=112/83 les/c/f=113/84/0 sis=114) [2] r=-1 lpr=114 pi=[83,114)/1 crt=54'449 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.185557 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1d] failed. State was: unregistering
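Epoch 114 moved 9.1d's acting set from [1] to [2], so the local copy on osd.1 is now surplus: the PG drops out of Started/Stray into Started/ToDelete (WaitDeleteReseved is Ceph's own spelling of the substate), reserves a delete slot, removes the on-disk data in Deleting, and finally exits Started. The exit-line durations nest outermost-last, so "exit Started 1.185557" already contains the inner states; reading them off the lines above, roughly:

    # Durations taken from the "exit <state> <secs> <events> <event-secs>"
    # lines for pg 9.1d above; substates nest, so "Started" spans them all.
    exits = {
        "Started/ToDelete/WaitDeleteReseved": 0.000147,
        "Started/ToDelete/Deleting": 0.115193,
        "Started/ToDelete": 0.115463,
        "Started": 1.185557,
    }
    print(f"local copy of 9.1d gone about {exits['Started']:.2f}s "
          f"after the interval change, "
          f"{exits['Started/ToDelete/Deleting']:.2f}s of it spent deleting")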
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67452928 unmapped: 565248 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:12.666626+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 108 sent 106 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:42.372938+0000 osd.1 (osd.1) 107 : cluster [DBG] 5.9 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:42.382661+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.9 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 115 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 54'458 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 1.075553 1 0.000683
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 54'458 active+remapped mbc={255={}}] exit Started/Primary/Active 1.122389 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 54'458 active+remapped mbc={255={}}] exit Started/Primary 2.109290 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 54'458 active+remapped mbc={255={}}] exit Started 2.109343 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=114) [0]/[1] async=[0] r=0 lpr=114 pi=[68,114)/1 crt=54'458 mlcod 54'458 active+remapped mbc={255={}}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116 pruub=14.884787560s) [0] async=[0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 54'458 active pruub 228.372085571s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122128 4 0.000229
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.122487 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=87/88 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116 pruub=14.884650230s) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 228.372085571s@ mbc={}] exit Reset 0.000198 1 0.000299
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116 pruub=14.884650230s) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 228.372085571s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116 pruub=14.884650230s) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 228.372085571s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116 pruub=14.884650230s) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 228.372085571s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116 pruub=14.884650230s) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 228.372085571s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116 pruub=14.884650230s) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY pruub 228.372085571s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 108) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:42.372938+0000 osd.1 (osd.1) 107 : cluster [DBG] 5.9 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:42.382661+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.9 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=87/87 les/c/f=88/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.075365 5 0.000938
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000186 1 0.000201
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000794 1 0.000072
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 54'454 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.040841 2 0.000132
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 116 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 54'454 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67477504 unmapped: 540672 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:13.666825+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 116 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 54'454 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.806742 1 0.000139
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 54'454 active+remapped mbc={255={}}] exit Started/Primary/Active 0.924358 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 54'454 active+remapped mbc={255={}}] exit Started/Primary 2.046867 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 54'454 active+remapped mbc={255={}}] exit Started 2.046926 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=115) [0]/[1] async=[0] r=0 lpr=115 pi=[87,115)/1 crt=54'454 mlcod 54'454 active+remapped mbc={255={}}] enter Reset
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117 pruub=15.150360107s) [0] async=[0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 54'454 active pruub 229.562225342s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117 pruub=15.150243759s) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 229.562225342s@ mbc={}] exit Reset 0.000166 1 0.000236
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117 pruub=15.150243759s) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 229.562225342s@ mbc={}] enter Started
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117 pruub=15.150243759s) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 229.562225342s@ mbc={}] enter Start
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117 pruub=15.150243759s) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 229.562225342s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117 pruub=15.150243759s) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 229.562225342s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117 pruub=15.150243759s) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY pruub 229.562225342s@ mbc={}] enter Started/Stray
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.924407 6 0.000272
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000937 2 0.000605
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 117 heartbeat osd_stat(store_statfs(0x1bca53000/0x0/0x1bfc00000, data 0x123d8f/0x1c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
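The heartbeat's store_statfs values are hex byte counts; reading the first triple as available/internally-reserved/total (the order BlueStore prints them in), this OSD sits on roughly a 7 GiB device that is nearly empty. A quick decode, values copied from the heartbeat line above:

    import re

    line = ("osd_stat(store_statfs(0x1bca53000/0x0/0x1bfc00000, "
            "data 0x123d8f/0x1c8000, compress 0x0/0x0/0x0, "
            "omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])")

    avail, reserved, total = (
        int(v, 16) for v in re.search(
            r"store_statfs\((0x[0-9a-f]+)/(0x[0-9a-f]+)/(0x[0-9a-f]+)",
            line).groups())

    GiB = 1024 ** 3
    print(f"{avail / GiB:.2f} GiB free of {total / GiB:.2f} GiB "
          f"({1 - avail / total:.2%} used)")
    # prints: 6.95 GiB free of 7.00 GiB (0.69% used)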
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] lb MIN local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=-1 lpr=116 DELETING pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.068687 2 0.000273
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] lb MIN local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.069713 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 117 pg[9.1e( v 54'458 (0'0,54'458] lb MIN local-lis/les=114/115 n=7 ec=51/44 lis/c=114/68 les/c/f=115/69/0 sis=116) [0] r=-1 lpr=116 pi=[68,116)/1 crt=54'458 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.994718 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1e] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67502080 unmapped: 516096 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:14.666968+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.241232 7 0.000167
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000086 1 0.000092
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] lb MIN local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=-1 lpr=117 DELETING pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.108025 2 0.000243
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] lb MIN local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.108163 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 pg_epoch: 118 pg[9.1f( v 54'454 (0'0,54'454] lb MIN local-lis/les=115/116 n=6 ec=51/44 lis/c=115/87 les/c/f=116/88/0 sis=117) [0] r=-1 lpr=117 pi=[87,117)/1 crt=54'454 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.349457 0 0.000000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67526656 unmapped: 491520 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 772827 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:15.667157+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca4f000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67526656 unmapped: 491520 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:16.667383+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67526656 unmapped: 491520 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:17.667568+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.814878464s of 11.049506187s, submitted: 62
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
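The _kv_sync_thread line is easy to misread: it reports idle time within the sampling window, so the busy fraction is 1 - idle/window, here about 11% across 62 submitted transactions. As arithmetic:

    # Figures from the _kv_sync_thread line above.
    idle, window, submitted = 9.814878464, 11.049506187, 62
    print(f"kv_sync busy {1 - idle / window:.1%} of {window:.1f}s window, "
          f"{submitted / window:.1f} txns/s")
    # prints: kv_sync busy 11.2% of 11.0s window, 5.6 txns/s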
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67534848 unmapped: 483328 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:18.667749+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 110 sent 108 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:48.354223+0000 osd.1 (osd.1) 109 : cluster [DBG] 3.10 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:48.363685+0000 osd.1 (osd.1) 110 : cluster [DBG] 3.10 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 110) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:48.354223+0000 osd.1 (osd.1) 109 : cluster [DBG] 3.10 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:48.363685+0000 osd.1 (osd.1) 110 : cluster [DBG] 3.10 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67543040 unmapped: 475136 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:19.668071+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 112 sent 110 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:49.396230+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.16 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:49.406011+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.16 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 112) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:49.396230+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.16 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:49.406011+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.16 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67559424 unmapped: 458752 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 773571 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:20.668335+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67559424 unmapped: 458752 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:21.668578+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 114 sent 112 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:51.338261+0000 osd.1 (osd.1) 113 : cluster [DBG] 3.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:52:51.348065+0000 osd.1 (osd.1) 114 : cluster [DBG] 3.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 114) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:51.338261+0000 osd.1 (osd.1) 113 : cluster [DBG] 3.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:52:51.348065+0000 osd.1 (osd.1) 114 : cluster [DBG] 3.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67575808 unmapped: 442368 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:22.668775+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67641344 unmapped: 376832 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:23.668963+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67657728 unmapped: 360448 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:24.669128+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 774718 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:25.669270+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67674112 unmapped: 344064 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:26.669462+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67698688 unmapped: 319488 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:27.669714+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67698688 unmapped: 319488 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:28.669834+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67715072 unmapped: 303104 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:29.669931+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 774718 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:30.670062+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.854300499s of 13.048527718s, submitted: 6
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67731456 unmapped: 286720 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:31.670201+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 116 sent 114 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:01.402490+0000 osd.1 (osd.1) 115 : cluster [DBG] 6.15 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:01.417269+0000 osd.1 (osd.1) 116 : cluster [DBG] 6.15 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67764224 unmapped: 253952 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 116) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:01.402490+0000 osd.1 (osd.1) 115 : cluster [DBG] 6.15 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:01.417269+0000 osd.1 (osd.1) 116 : cluster [DBG] 6.15 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:32.670426+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 118 sent 116 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:02.388625+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.15 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:02.398458+0000 osd.1 (osd.1) 118 : cluster [DBG] 5.15 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67764224 unmapped: 253952 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:33.670549+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 4 last_log 120 sent 118 num 4 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:03.391990+0000 osd.1 (osd.1) 119 : cluster [DBG] 3.13 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:03.401760+0000 osd.1 (osd.1) 120 : cluster [DBG] 3.13 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 118) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:02.388625+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.15 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:02.398458+0000 osd.1 (osd.1) 118 : cluster [DBG] 5.15 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67780608 unmapped: 237568 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:34.670739+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 4 last_log 122 sent 120 num 4 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:04.398675+0000 osd.1 (osd.1) 121 : cluster [DBG] 3.14 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:04.408462+0000 osd.1 (osd.1) 122 : cluster [DBG] 3.14 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 120) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:03.391990+0000 osd.1 (osd.1) 119 : cluster [DBG] 3.13 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:03.401760+0000 osd.1 (osd.1) 120 : cluster [DBG] 3.13 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 122) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:04.398675+0000 osd.1 (osd.1) 121 : cluster [DBG] 3.14 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:04.408462+0000 osd.1 (osd.1) 122 : cluster [DBG] 3.14 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67788800 unmapped: 229376 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 780458 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:35.670947+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 124 sent 122 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:05.439293+0000 osd.1 (osd.1) 123 : cluster [DBG] 4.13 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:05.449050+0000 osd.1 (osd.1) 124 : cluster [DBG] 4.13 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 124) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:05.439293+0000 osd.1 (osd.1) 123 : cluster [DBG] 4.13 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:05.449050+0000 osd.1 (osd.1) 124 : cluster [DBG] 4.13 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67796992 unmapped: 221184 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:36.671184+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67805184 unmapped: 212992 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:37.671309+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67821568 unmapped: 196608 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:38.671442+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 126 sent 124 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:08.366946+0000 osd.1 (osd.1) 125 : cluster [DBG] 5.11 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:08.376795+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.11 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 126) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:08.366946+0000 osd.1 (osd.1) 125 : cluster [DBG] 5.11 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:08.376795+0000 osd.1 (osd.1) 126 : cluster [DBG] 5.11 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67821568 unmapped: 196608 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:39.671623+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67846144 unmapped: 172032 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 782754 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:40.671808+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 128 sent 126 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:10.376043+0000 osd.1 (osd.1) 127 : cluster [DBG] 3.16 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:10.385895+0000 osd.1 (osd.1) 128 : cluster [DBG] 3.16 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 128) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:10.376043+0000 osd.1 (osd.1) 127 : cluster [DBG] 3.16 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:10.385895+0000 osd.1 (osd.1) 128 : cluster [DBG] 3.16 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67846144 unmapped: 172032 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:41.672042+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.566414833s of 10.932034492s, submitted: 14
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67862528 unmapped: 155648 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:42.672202+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 130 sent 128 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:12.334561+0000 osd.1 (osd.1) 129 : cluster [DBG] 5.1f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:12.344343+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.1f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 130) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:12.334561+0000 osd.1 (osd.1) 129 : cluster [DBG] 5.1f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:12.344343+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.1f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67862528 unmapped: 155648 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:43.672590+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67903488 unmapped: 114688 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:44.672751+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 132 sent 130 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:14.262160+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.10 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:14.271907+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.10 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67928064 unmapped: 90112 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 785050 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:45.672991+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 132) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:14.262160+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.10 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:14.271907+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.10 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67928064 unmapped: 90112 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:46.673119+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 134 sent 132 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:16.305177+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.6 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:16.314869+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.6 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 134) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:16.305177+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.6 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:16.314869+0000 osd.1 (osd.1) 134 : cluster [DBG] 10.6 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67960832 unmapped: 57344 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:47.673297+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67960832 unmapped: 57344 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:48.673445+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67960832 unmapped: 57344 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:49.673585+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67985408 unmapped: 32768 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 786198 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:50.673732+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 67985408 unmapped: 32768 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:51.673869+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.853096008s of 10.008735657s, submitted: 7
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68001792 unmapped: 16384 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:52.674043+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 136 sent 134 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:22.318705+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.7 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:22.328740+0000 osd.1 (osd.1) 136 : cluster [DBG] 10.7 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 136) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:22.318705+0000 osd.1 (osd.1) 135 : cluster [DBG] 10.7 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:22.328740+0000 osd.1 (osd.1) 136 : cluster [DBG] 10.7 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68001792 unmapped: 16384 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:53.674257+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68018176 unmapped: 0 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:54.674400+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68018176 unmapped: 0 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 788494 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:55.674542+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 138 sent 136 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:25.379804+0000 osd.1 (osd.1) 137 : cluster [DBG] 10.9 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:25.392199+0000 osd.1 (osd.1) 138 : cluster [DBG] 10.9 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68018176 unmapped: 0 heap: 68018176 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 138) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:25.379804+0000 osd.1 (osd.1) 137 : cluster [DBG] 10.9 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:25.392199+0000 osd.1 (osd.1) 138 : cluster [DBG] 10.9 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:56.674733+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68042752 unmapped: 1024000 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:57.675014+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 140 sent 138 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:27.370563+0000 osd.1 (osd.1) 139 : cluster [DBG] 10.a scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:27.380360+0000 osd.1 (osd.1) 140 : cluster [DBG] 10.a scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68042752 unmapped: 1024000 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 140) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:27.370563+0000 osd.1 (osd.1) 139 : cluster [DBG] 10.a scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:27.380360+0000 osd.1 (osd.1) 140 : cluster [DBG] 10.a scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:58.675287+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68050944 unmapped: 1015808 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:52:59.675429+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68075520 unmapped: 991232 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 789642 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:00.675618+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68083712 unmapped: 983040 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:01.675773+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68083712 unmapped: 983040 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:02.675934+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68091904 unmapped: 974848 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:03.676501+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68108288 unmapped: 958464 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:04.676637+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68116480 unmapped: 950272 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 789642 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:05.676776+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68132864 unmapped: 933888 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:06.676966+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68157440 unmapped: 909312 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:07.677112+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68165632 unmapped: 901120 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:08.677238+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68165632 unmapped: 901120 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:09.677404+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68182016 unmapped: 884736 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 789642 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:10.677538+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68190208 unmapped: 876544 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:11.677681+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.673984528s of 19.763648987s, submitted: 5
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68190208 unmapped: 876544 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:12.677928+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 142 sent 140 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:42.107290+0000 osd.1 (osd.1) 141 : cluster [DBG] 10.b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:42.116934+0000 osd.1 (osd.1) 142 : cluster [DBG] 10.b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 142) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:42.107290+0000 osd.1 (osd.1) 141 : cluster [DBG] 10.b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:42.116934+0000 osd.1 (osd.1) 142 : cluster [DBG] 10.b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68198400 unmapped: 868352 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:13.678254+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68214784 unmapped: 851968 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:14.678481+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68231168 unmapped: 835584 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 790790 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:15.678655+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68239360 unmapped: 827392 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:16.680318+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68255744 unmapped: 811008 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:17.680517+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68255744 unmapped: 811008 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:18.680718+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 144 sent 142 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:48.044560+0000 osd.1 (osd.1) 143 : cluster [DBG] 10.c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:48.054142+0000 osd.1 (osd.1) 144 : cluster [DBG] 10.c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 144) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:48.044560+0000 osd.1 (osd.1) 143 : cluster [DBG] 10.c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:48.054142+0000 osd.1 (osd.1) 144 : cluster [DBG] 10.c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68272128 unmapped: 794624 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:19.680952+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 146 sent 144 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:49.048960+0000 osd.1 (osd.1) 145 : cluster [DBG] 10.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:49.061358+0000 osd.1 (osd.1) 146 : cluster [DBG] 10.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 146) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:49.048960+0000 osd.1 (osd.1) 145 : cluster [DBG] 10.d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:49.061358+0000 osd.1 (osd.1) 146 : cluster [DBG] 10.d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68272128 unmapped: 794624 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 793086 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:20.681233+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68280320 unmapped: 786432 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:21.681420+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 148 sent 146 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:51.027779+0000 osd.1 (osd.1) 147 : cluster [DBG] 10.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:51.040085+0000 osd.1 (osd.1) 148 : cluster [DBG] 10.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 148) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:51.027779+0000 osd.1 (osd.1) 147 : cluster [DBG] 10.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:51.040085+0000 osd.1 (osd.1) 148 : cluster [DBG] 10.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68296704 unmapped: 770048 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:22.681689+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68296704 unmapped: 770048 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:23.681901+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68313088 unmapped: 753664 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:24.682077+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.826052666s of 12.989222527s, submitted: 8
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68337664 unmapped: 729088 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 795383 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:25.682238+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 150 sent 148 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:55.096478+0000 osd.1 (osd.1) 149 : cluster [DBG] 10.16 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:53:55.106109+0000 osd.1 (osd.1) 150 : cluster [DBG] 10.16 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 150) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:55.096478+0000 osd.1 (osd.1) 149 : cluster [DBG] 10.16 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:53:55.106109+0000 osd.1 (osd.1) 150 : cluster [DBG] 10.16 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68337664 unmapped: 729088 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:26.682432+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68345856 unmapped: 720896 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:27.682631+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:28.682805+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68345856 unmapped: 720896 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:29.682967+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68345856 unmapped: 720896 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:30.683109+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 152 sent 150 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:00.119698+0000 osd.1 (osd.1) 151 : cluster [DBG] 10.17 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:00.129497+0000 osd.1 (osd.1) 152 : cluster [DBG] 10.17 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68370432 unmapped: 696320 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 796532 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:31.683308+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68370432 unmapped: 696320 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 152) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:00.119698+0000 osd.1 (osd.1) 151 : cluster [DBG] 10.17 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:00.129497+0000 osd.1 (osd.1) 152 : cluster [DBG] 10.17 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:32.683514+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68378624 unmapped: 688128 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:33.683689+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68378624 unmapped: 688128 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:34.684020+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68395008 unmapped: 671744 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.894586563s of 10.064516068s, submitted: 4
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:35.684229+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 154 sent 152 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:05.161385+0000 osd.1 (osd.1) 153 : cluster [DBG] 10.1a scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:05.170671+0000 osd.1 (osd.1) 154 : cluster [DBG] 10.1a scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68419584 unmapped: 647168 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 797681 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 154) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:05.161385+0000 osd.1 (osd.1) 153 : cluster [DBG] 10.1a scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:05.170671+0000 osd.1 (osd.1) 154 : cluster [DBG] 10.1a scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:36.684562+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68419584 unmapped: 647168 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:37.684828+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68427776 unmapped: 638976 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:38.684989+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68435968 unmapped: 630784 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:39.685154+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 156 sent 154 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:09.203358+0000 osd.1 (osd.1) 155 : cluster [DBG] 10.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:09.213190+0000 osd.1 (osd.1) 156 : cluster [DBG] 10.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68444160 unmapped: 622592 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 156) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:09.203358+0000 osd.1 (osd.1) 155 : cluster [DBG] 10.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:09.213190+0000 osd.1 (osd.1) 156 : cluster [DBG] 10.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:40.685401+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68272128 unmapped: 794624 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 798830 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:41.685633+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 158 sent 156 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:11.253529+0000 osd.1 (osd.1) 157 : cluster [DBG] 10.1d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:11.263391+0000 osd.1 (osd.1) 158 : cluster [DBG] 10.1d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68280320 unmapped: 786432 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 158) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:11.253529+0000 osd.1 (osd.1) 157 : cluster [DBG] 10.1d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:11.263391+0000 osd.1 (osd.1) 158 : cluster [DBG] 10.1d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:42.685942+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 160 sent 158 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:12.288992+0000 osd.1 (osd.1) 159 : cluster [DBG] 10.1f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:12.301241+0000 osd.1 (osd.1) 160 : cluster [DBG] 10.1f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68280320 unmapped: 786432 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 160) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:12.288992+0000 osd.1 (osd.1) 159 : cluster [DBG] 10.1f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:12.301241+0000 osd.1 (osd.1) 160 : cluster [DBG] 10.1f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:43.686204+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 162 sent 160 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:13.319206+0000 osd.1 (osd.1) 161 : cluster [DBG] 8.14 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:13.328979+0000 osd.1 (osd.1) 162 : cluster [DBG] 8.14 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68288512 unmapped: 778240 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 162) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:13.319206+0000 osd.1 (osd.1) 161 : cluster [DBG] 8.14 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:13.328979+0000 osd.1 (osd.1) 162 : cluster [DBG] 8.14 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:44.686400+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68288512 unmapped: 778240 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.108569145s of 10.175421715s, submitted: 10
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:45.686572+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 164 sent 162 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:15.336459+0000 osd.1 (osd.1) 163 : cluster [DBG] 11.12 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:15.346264+0000 osd.1 (osd.1) 164 : cluster [DBG] 11.12 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68304896 unmapped: 761856 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 803425 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:46.686847+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68304896 unmapped: 761856 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 164) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:15.336459+0000 osd.1 (osd.1) 163 : cluster [DBG] 11.12 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:15.346264+0000 osd.1 (osd.1) 164 : cluster [DBG] 11.12 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:47.687076+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 166 sent 164 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:17.339930+0000 osd.1 (osd.1) 165 : cluster [DBG] 11.14 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:17.349548+0000 osd.1 (osd.1) 166 : cluster [DBG] 11.14 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68321280 unmapped: 745472 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 166) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:17.339930+0000 osd.1 (osd.1) 165 : cluster [DBG] 11.14 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:17.349548+0000 osd.1 (osd.1) 166 : cluster [DBG] 11.14 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:48.687316+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68345856 unmapped: 720896 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:49.687496+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68354048 unmapped: 712704 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:50.687664+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68362240 unmapped: 704512 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 804574 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:51.687796+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68370432 unmapped: 696320 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:52.687941+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68370432 unmapped: 696320 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:53.688124+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68378624 unmapped: 688128 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:54.688307+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68378624 unmapped: 688128 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:55.688466+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68395008 unmapped: 671744 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 804574 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:56.688640+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68395008 unmapped: 671744 heap: 69066752 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.789650917s of 12.051115036s, submitted: 4
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:57.688806+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 168 sent 166 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:27.388125+0000 osd.1 (osd.1) 167 : cluster [DBG] 8.10 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:27.397976+0000 osd.1 (osd.1) 168 : cluster [DBG] 8.10 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 68403200 unmapped: 1712128 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 168) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:27.388125+0000 osd.1 (osd.1) 167 : cluster [DBG] 8.10 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:27.397976+0000 osd.1 (osd.1) 168 : cluster [DBG] 8.10 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:58.689096+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 170 sent 168 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:28.346820+0000 osd.1 (osd.1) 169 : cluster [DBG] 11.1 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:28.357303+0000 osd.1 (osd.1) 170 : cluster [DBG] 11.1 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69468160 unmapped: 647168 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 170) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:28.346820+0000 osd.1 (osd.1) 169 : cluster [DBG] 11.1 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:28.357303+0000 osd.1 (osd.1) 170 : cluster [DBG] 11.1 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:53:59.689313+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69468160 unmapped: 647168 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:00.689475+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 172 sent 170 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:30.351366+0000 osd.1 (osd.1) 171 : cluster [DBG] 8.8 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:30.363627+0000 osd.1 (osd.1) 172 : cluster [DBG] 8.8 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69484544 unmapped: 630784 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 808017 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 172) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:30.351366+0000 osd.1 (osd.1) 171 : cluster [DBG] 8.8 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:30.363627+0000 osd.1 (osd.1) 172 : cluster [DBG] 8.8 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:01.689672+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69484544 unmapped: 630784 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:02.689830+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69492736 unmapped: 622592 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:03.690227+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69492736 unmapped: 622592 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:04.690427+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69492736 unmapped: 622592 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:05.690592+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69500928 unmapped: 614400 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 808017 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:06.690772+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69500928 unmapped: 614400 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.940105438s of 10.076905251s, submitted: 7
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:07.690979+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 174 sent 172 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:37.383726+0000 osd.1 (osd.1) 173 : cluster [DBG] 8.17 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:37.393558+0000 osd.1 (osd.1) 174 : cluster [DBG] 8.17 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69517312 unmapped: 598016 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 174) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:37.383726+0000 osd.1 (osd.1) 173 : cluster [DBG] 8.17 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:37.393558+0000 osd.1 (osd.1) 174 : cluster [DBG] 8.17 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:08.691234+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69525504 unmapped: 589824 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:09.691401+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 176 sent 174 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:39.378988+0000 osd.1 (osd.1) 175 : cluster [DBG] 11.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:39.388446+0000 osd.1 (osd.1) 176 : cluster [DBG] 11.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69525504 unmapped: 589824 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 176) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:39.378988+0000 osd.1 (osd.1) 175 : cluster [DBG] 11.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:39.388446+0000 osd.1 (osd.1) 176 : cluster [DBG] 11.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:10.691599+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 178 sent 176 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:40.358671+0000 osd.1 (osd.1) 177 : cluster [DBG] 11.5 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:40.368400+0000 osd.1 (osd.1) 178 : cluster [DBG] 11.5 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69533696 unmapped: 581632 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 811461 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 178) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:40.358671+0000 osd.1 (osd.1) 177 : cluster [DBG] 11.5 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:40.368400+0000 osd.1 (osd.1) 178 : cluster [DBG] 11.5 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:11.691789+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69533696 unmapped: 581632 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:12.691956+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69533696 unmapped: 581632 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:13.692155+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 180 sent 178 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:43.292735+0000 osd.1 (osd.1) 179 : cluster [DBG] 8.4 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:43.302471+0000 osd.1 (osd.1) 180 : cluster [DBG] 8.4 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69550080 unmapped: 565248 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 180) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:43.292735+0000 osd.1 (osd.1) 179 : cluster [DBG] 8.4 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:43.302471+0000 osd.1 (osd.1) 180 : cluster [DBG] 8.4 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:14.692349+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69558272 unmapped: 557056 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:15.692488+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 182 sent 180 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:45.296792+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.4 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:45.305907+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.4 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69582848 unmapped: 532480 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 813756 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:16.692713+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 182) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:45.296792+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.4 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:45.305907+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.4 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69591040 unmapped: 524288 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:17.692913+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 184 sent 182 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:47.317211+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.7 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:47.326964+0000 osd.1 (osd.1) 184 : cluster [DBG] 11.7 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 184) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:47.317211+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.7 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:47.326964+0000 osd.1 (osd.1) 184 : cluster [DBG] 11.7 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69599232 unmapped: 516096 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:18.693628+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69599232 unmapped: 516096 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.835612297s of 11.911173820s, submitted: 11
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:19.693809+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 186 sent 184 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:49.376263+0000 osd.1 (osd.1) 185 : cluster [DBG] 8.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:49.386087+0000 osd.1 (osd.1) 186 : cluster [DBG] 8.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 186) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:49.376263+0000 osd.1 (osd.1) 185 : cluster [DBG] 8.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:49.386087+0000 osd.1 (osd.1) 186 : cluster [DBG] 8.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69599232 unmapped: 516096 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:20.694061+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69607424 unmapped: 507904 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 816052 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:21.694253+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69607424 unmapped: 507904 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:22.694433+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 188 sent 186 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:52.377196+0000 osd.1 (osd.1) 187 : cluster [DBG] 11.1a deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:52.387237+0000 osd.1 (osd.1) 188 : cluster [DBG] 11.1a deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 188) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:52.377196+0000 osd.1 (osd.1) 187 : cluster [DBG] 11.1a deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:52.387237+0000 osd.1 (osd.1) 188 : cluster [DBG] 11.1a deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69615616 unmapped: 499712 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:23.694656+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69632000 unmapped: 483328 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:24.694813+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69632000 unmapped: 483328 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:25.694978+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 190 sent 188 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:55.447466+0000 osd.1 (osd.1) 189 : cluster [DBG] 11.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:55.457219+0000 osd.1 (osd.1) 190 : cluster [DBG] 11.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 190) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:55.447466+0000 osd.1 (osd.1) 189 : cluster [DBG] 11.1b scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:55.457219+0000 osd.1 (osd.1) 190 : cluster [DBG] 11.1b scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69640192 unmapped: 475136 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 818350 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:26.695203+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 192 sent 190 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:56.481559+0000 osd.1 (osd.1) 191 : cluster [DBG] 8.18 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:56.491281+0000 osd.1 (osd.1) 192 : cluster [DBG] 8.18 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69640192 unmapped: 475136 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 192) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:56.481559+0000 osd.1 (osd.1) 191 : cluster [DBG] 8.18 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:56.491281+0000 osd.1 (osd.1) 192 : cluster [DBG] 8.18 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:27.695408+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69648384 unmapped: 466944 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:28.695622+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69640192 unmapped: 475136 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.068423271s of 10.095142365s, submitted: 8
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:29.695767+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 194 sent 192 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:59.471563+0000 osd.1 (osd.1) 193 : cluster [DBG] 11.1d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:54:59.481352+0000 osd.1 (osd.1) 194 : cluster [DBG] 11.1d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69640192 unmapped: 475136 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 194) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:59.471563+0000 osd.1 (osd.1) 193 : cluster [DBG] 11.1d scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:54:59.481352+0000 osd.1 (osd.1) 194 : cluster [DBG] 11.1d scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:30.695992+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 196 sent 194 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:00.517541+0000 osd.1 (osd.1) 195 : cluster [DBG] 8.19 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:00.527448+0000 osd.1 (osd.1) 196 : cluster [DBG] 8.19 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69648384 unmapped: 466944 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 821795 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 196) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:00.517541+0000 osd.1 (osd.1) 195 : cluster [DBG] 8.19 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:00.527448+0000 osd.1 (osd.1) 196 : cluster [DBG] 8.19 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:31.696193+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 198 sent 196 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:01.551036+0000 osd.1 (osd.1) 197 : cluster [DBG] 11.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:01.560818+0000 osd.1 (osd.1) 198 : cluster [DBG] 11.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69648384 unmapped: 466944 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 198) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:01.551036+0000 osd.1 (osd.1) 197 : cluster [DBG] 11.1c scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:01.560818+0000 osd.1 (osd.1) 198 : cluster [DBG] 11.1c scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:32.696415+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69656576 unmapped: 458752 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:33.696604+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 200 sent 198 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:03.593246+0000 osd.1 (osd.1) 199 : cluster [DBG] 11.1e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:03.603044+0000 osd.1 (osd.1) 200 : cluster [DBG] 11.1e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69664768 unmapped: 450560 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 200) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:03.593246+0000 osd.1 (osd.1) 199 : cluster [DBG] 11.1e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:03.603044+0000 osd.1 (osd.1) 200 : cluster [DBG] 11.1e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:34.696813+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69672960 unmapped: 442368 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:35.696964+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 202 sent 200 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:05.609967+0000 osd.1 (osd.1) 201 : cluster [DBG] 8.12 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:05.619605+0000 osd.1 (osd.1) 202 : cluster [DBG] 8.12 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69672960 unmapped: 442368 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 825241 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 202) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:05.609967+0000 osd.1 (osd.1) 201 : cluster [DBG] 8.12 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:05.619605+0000 osd.1 (osd.1) 202 : cluster [DBG] 8.12 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:36.697172+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69672960 unmapped: 442368 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:37.697298+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 204 sent 202 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:07.593601+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.6 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:07.618288+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.6 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69681152 unmapped: 434176 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 204) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:07.593601+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.6 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:07.618288+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.6 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:38.697484+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69681152 unmapped: 434176 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:39.697648+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69689344 unmapped: 425984 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.881838799s of 11.135094643s, submitted: 12
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:40.697825+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 206 sent 204 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:10.606704+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:10.628922+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69689344 unmapped: 425984 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 827535 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 206) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:10.606704+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.e scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:10.628922+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.e scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:41.698052+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69689344 unmapped: 425984 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:42.698201+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69697536 unmapped: 417792 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:43.698367+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69697536 unmapped: 417792 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:44.698524+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69697536 unmapped: 417792 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:45.698667+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69705728 unmapped: 409600 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 827535 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:46.698816+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69705728 unmapped: 409600 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:47.698964+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69713920 unmapped: 401408 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:48.699102+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69722112 unmapped: 393216 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:49.699255+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 208 sent 206 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:19.596447+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.a scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:19.628399+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.a scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69730304 unmapped: 385024 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 208) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:19.596447+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.a scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:19.628399+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.a scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.023387909s of 10.035495758s, submitted: 4
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:50.699494+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 210 sent 208 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:20.642324+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.d deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:20.671847+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.d deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69730304 unmapped: 385024 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 829829 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 210) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:20.642324+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.d deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:20.671847+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.d deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:51.699749+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69730304 unmapped: 385024 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:52.699955+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 1 last_log 211 sent 210 num 1 unsent 1 sending 1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:22.681590+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69738496 unmapped: 376832 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 211) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:22.681590+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.f scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:53.700158+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 1 last_log 212 sent 211 num 1 unsent 1 sending 1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:22.708751+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69746688 unmapped: 368640 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:54.700372+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 3 last_log 214 sent 212 num 3 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:23.713651+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.10 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:23.728024+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.10 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 212) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:22.708751+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.f scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69746688 unmapped: 368640 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:55.700677+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 4 last_log 216 sent 214 num 4 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:24.700716+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.11 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:24.722947+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.11 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 214) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:23.713651+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.10 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:23.728024+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.10 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 216) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:24.700716+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.11 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:24.722947+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.11 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69754880 unmapped: 360448 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 833272 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:56.700885+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69754880 unmapped: 360448 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:57.701035+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 218 sent 216 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:26.722903+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.12 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:26.742542+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.12 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69763072 unmapped: 352256 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:58.701229+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69771264 unmapped: 344064 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:54:59.701412+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69779456 unmapped: 335872 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 218) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:26.722903+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.12 scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:26.742542+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.12 scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:00.701573+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69779456 unmapped: 335872 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 834420 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:01.701724+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69779456 unmapped: 335872 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:02.701964+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.921080589s of 12.146616936s, submitted: 10
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69795840 unmapped: 319488 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:03.702260+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  log_queue is 2 last_log 220 sent 218 num 2 unsent 2 sending 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:32.789007+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.15 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  will send 2026-01-31T06:55:32.808729+0000 osd.1 (osd.1) 220 : cluster [DBG] 9.15 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69795840 unmapped: 319488 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:04.702492+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69804032 unmapped: 311296 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:05.702686+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69804032 unmapped: 311296 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:06.702953+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69804032 unmapped: 311296 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:07.703105+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69804032 unmapped: 311296 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:08.703190+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69804032 unmapped: 311296 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:09.703327+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69812224 unmapped: 303104 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:10.703457+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69812224 unmapped: 303104 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:11.703574+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69812224 unmapped: 303104 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:12.703762+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69820416 unmapped: 294912 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:13.703908+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client handle_log_ack log(last 220) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:32.789007+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.15 deep-scrub starts
Jan 31 07:16:52 compute-1 ceph-osd[79145]: log_client  logged 2026-01-31T06:55:32.808729+0000 osd.1 (osd.1) 220 : cluster [DBG] 9.15 deep-scrub ok
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69820416 unmapped: 294912 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:14.704119+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69820416 unmapped: 294912 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:15.704262+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69828608 unmapped: 286720 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:16.704406+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69828608 unmapped: 286720 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:17.704531+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69836800 unmapped: 278528 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:18.704704+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69836800 unmapped: 278528 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:19.704914+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69836800 unmapped: 278528 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:20.705064+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69853184 unmapped: 262144 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:21.705231+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69853184 unmapped: 262144 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:22.705364+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69861376 unmapped: 253952 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:23.705576+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69861376 unmapped: 253952 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:24.705740+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69861376 unmapped: 253952 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:25.705937+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69861376 unmapped: 253952 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:26.706110+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69869568 unmapped: 245760 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:27.706284+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69877760 unmapped: 237568 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:28.706487+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69885952 unmapped: 229376 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:29.706682+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69885952 unmapped: 229376 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:30.706876+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69894144 unmapped: 221184 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:31.707047+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69894144 unmapped: 221184 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:32.707269+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69902336 unmapped: 212992 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:33.707754+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69910528 unmapped: 204800 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:34.708169+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69910528 unmapped: 204800 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:35.708411+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69918720 unmapped: 196608 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:36.708604+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69918720 unmapped: 196608 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:37.708801+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69918720 unmapped: 196608 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:38.709109+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69918720 unmapped: 196608 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:39.709302+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69926912 unmapped: 188416 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:40.709477+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69926912 unmapped: 188416 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:41.709746+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69935104 unmapped: 180224 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:42.709986+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69935104 unmapped: 180224 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:43.710183+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69935104 unmapped: 180224 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:44.710435+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69943296 unmapped: 172032 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:45.710722+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69943296 unmapped: 172032 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:46.710959+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69951488 unmapped: 163840 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:47.711279+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69951488 unmapped: 163840 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:48.711550+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69959680 unmapped: 155648 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:49.711796+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69959680 unmapped: 155648 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:50.712057+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69967872 unmapped: 147456 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:51.712310+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69967872 unmapped: 147456 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:52.712507+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69967872 unmapped: 147456 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:53.712740+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69984256 unmapped: 131072 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:54.712954+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69984256 unmapped: 131072 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:55.713148+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69984256 unmapped: 131072 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:56.713317+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69992448 unmapped: 122880 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:57.713528+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69992448 unmapped: 122880 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:58.713672+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 69992448 unmapped: 122880 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:55:59.713952+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70000640 unmapped: 114688 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:00.714132+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70000640 unmapped: 114688 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:01.714355+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70008832 unmapped: 106496 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:02.714539+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70008832 unmapped: 106496 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:03.714810+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70008832 unmapped: 106496 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:04.714961+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70017024 unmapped: 98304 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:05.715102+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70017024 unmapped: 98304 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:06.715297+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70025216 unmapped: 90112 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:07.715562+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70025216 unmapped: 90112 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:08.715806+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70025216 unmapped: 90112 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:09.716038+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70033408 unmapped: 81920 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:10.716219+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70033408 unmapped: 81920 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:11.716441+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70033408 unmapped: 81920 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:12.716677+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70041600 unmapped: 73728 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:13.716971+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70041600 unmapped: 73728 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:14.717124+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70049792 unmapped: 65536 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:15.717338+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70049792 unmapped: 65536 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:16.717516+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70049792 unmapped: 65536 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:17.717743+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70066176 unmapped: 49152 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:18.717928+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70066176 unmapped: 49152 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:19.718184+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70066176 unmapped: 49152 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:20.718384+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70074368 unmapped: 40960 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:21.718625+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70074368 unmapped: 40960 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:22.718784+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70082560 unmapped: 32768 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:23.719033+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70082560 unmapped: 32768 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:24.719192+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70090752 unmapped: 24576 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:25.719363+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70090752 unmapped: 24576 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:26.719521+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70090752 unmapped: 24576 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:27.719737+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70098944 unmapped: 16384 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:28.719960+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70107136 unmapped: 8192 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:29.720157+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70107136 unmapped: 8192 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:30.720326+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70115328 unmapped: 0 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:31.720594+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70115328 unmapped: 0 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:32.720814+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70107136 unmapped: 8192 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:33.721089+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70115328 unmapped: 0 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:34.721390+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70115328 unmapped: 0 heap: 70115328 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:35.721612+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70123520 unmapped: 1040384 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:36.721790+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70123520 unmapped: 1040384 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:37.722036+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70123520 unmapped: 1040384 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:38.722255+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70131712 unmapped: 1032192 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:39.722432+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70131712 unmapped: 1032192 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:40.722595+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70139904 unmapped: 1024000 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:41.722797+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70139904 unmapped: 1024000 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:42.722967+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70139904 unmapped: 1024000 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:43.723320+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70148096 unmapped: 1015808 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:44.723521+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70148096 unmapped: 1015808 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:45.723700+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70148096 unmapped: 1015808 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:46.723902+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70156288 unmapped: 1007616 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:47.724081+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70156288 unmapped: 1007616 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:48.724281+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70156288 unmapped: 1007616 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:49.724446+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70164480 unmapped: 999424 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:50.724652+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70164480 unmapped: 999424 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:51.724883+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70172672 unmapped: 991232 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:52.725054+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70172672 unmapped: 991232 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:53.725428+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70189056 unmapped: 974848 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:54.725611+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70189056 unmapped: 974848 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:55.725806+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70197248 unmapped: 966656 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:56.725953+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70197248 unmapped: 966656 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:57.726150+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70205440 unmapped: 958464 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:58.726374+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70205440 unmapped: 958464 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:56:59.726624+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70213632 unmapped: 950272 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:00.726804+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70213632 unmapped: 950272 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:01.726996+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70221824 unmapped: 942080 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:02.727202+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70230016 unmapped: 933888 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:03.727587+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70230016 unmapped: 933888 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:04.727837+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70238208 unmapped: 925696 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:05.728073+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70238208 unmapped: 925696 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:06.728287+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70246400 unmapped: 917504 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:07.728528+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70246400 unmapped: 917504 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:08.728686+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70246400 unmapped: 917504 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:09.728908+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70254592 unmapped: 909312 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:10.729122+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70254592 unmapped: 909312 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:11.729303+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70262784 unmapped: 901120 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:12.729461+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70262784 unmapped: 901120 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:13.729685+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70254592 unmapped: 909312 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:14.729929+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70254592 unmapped: 909312 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:15.730102+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70262784 unmapped: 901120 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:16.730239+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70262784 unmapped: 901120 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:17.730394+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70270976 unmapped: 892928 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:18.730595+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70270976 unmapped: 892928 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:19.730736+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70270976 unmapped: 892928 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:20.730947+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70279168 unmapped: 884736 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:21.731088+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70287360 unmapped: 876544 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:22.731228+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70295552 unmapped: 868352 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:23.731394+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70295552 unmapped: 868352 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:24.731558+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70295552 unmapped: 868352 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:25.731717+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70303744 unmapped: 860160 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:26.731957+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70311936 unmapped: 851968 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:27.732145+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70320128 unmapped: 843776 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:28.732389+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70320128 unmapped: 843776 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:29.732579+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70320128 unmapped: 843776 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:30.732755+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70328320 unmapped: 835584 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:31.732930+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70328320 unmapped: 835584 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:32.733087+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70336512 unmapped: 827392 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:33.733355+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70352896 unmapped: 811008 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:34.733503+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70352896 unmapped: 811008 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:35.733657+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70361088 unmapped: 802816 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:36.733906+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70361088 unmapped: 802816 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:37.734135+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70369280 unmapped: 794624 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:38.734363+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70369280 unmapped: 794624 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:39.734585+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70369280 unmapped: 794624 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:40.734812+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70377472 unmapped: 786432 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:41.734963+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70377472 unmapped: 786432 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:42.735155+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70369280 unmapped: 794624 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:43.735539+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70377472 unmapped: 786432 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:44.736072+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70377472 unmapped: 786432 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:45.736319+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70385664 unmapped: 778240 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:46.736505+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70385664 unmapped: 778240 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:47.736651+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70402048 unmapped: 761856 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:48.736937+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70402048 unmapped: 761856 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:49.737084+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70402048 unmapped: 761856 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:50.737444+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70402048 unmapped: 761856 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:51.737595+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70410240 unmapped: 753664 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:52.737750+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70410240 unmapped: 753664 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:53.737953+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70418432 unmapped: 745472 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:54.738259+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70418432 unmapped: 745472 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:55.738463+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70418432 unmapped: 745472 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:56.738613+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70426624 unmapped: 737280 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:57.738749+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70426624 unmapped: 737280 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:58.738965+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70434816 unmapped: 729088 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:57:59.739090+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70434816 unmapped: 729088 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:00.739471+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70434816 unmapped: 729088 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:01.739589+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70443008 unmapped: 720896 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:02.739937+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70443008 unmapped: 720896 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:03.740298+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70443008 unmapped: 720896 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:04.740425+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70451200 unmapped: 712704 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:05.740668+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70451200 unmapped: 712704 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:06.740974+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70451200 unmapped: 712704 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:07.741286+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70467584 unmapped: 696320 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:08.741590+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70467584 unmapped: 696320 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:09.741715+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70475776 unmapped: 688128 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:10.742032+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70475776 unmapped: 688128 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:11.742185+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70475776 unmapped: 688128 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:12.742319+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70483968 unmapped: 679936 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:13.742462+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70483968 unmapped: 679936 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:14.742602+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70492160 unmapped: 671744 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:15.742736+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70492160 unmapped: 671744 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:16.742969+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70492160 unmapped: 671744 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:17.743102+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70500352 unmapped: 663552 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:18.743352+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70500352 unmapped: 663552 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:19.743593+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70508544 unmapped: 655360 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:20.743783+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70508544 unmapped: 655360 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:21.743962+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70508544 unmapped: 655360 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:22.744193+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70516736 unmapped: 647168 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:23.744516+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70516736 unmapped: 647168 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:24.744661+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70516736 unmapped: 647168 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:25.744842+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70524928 unmapped: 638976 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:26.745027+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70524928 unmapped: 638976 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:27.745184+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70533120 unmapped: 630784 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:28.745382+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:29.745594+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70533120 unmapped: 630784 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:30.745773+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70533120 unmapped: 630784 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:31.745893+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70541312 unmapped: 622592 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:32.746078+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70549504 unmapped: 614400 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:33.746270+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70549504 unmapped: 614400 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:34.746485+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70557696 unmapped: 606208 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:35.746661+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70557696 unmapped: 606208 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:36.746910+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70557696 unmapped: 606208 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:37.747167+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70565888 unmapped: 598016 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:38.747316+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70565888 unmapped: 598016 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5888 writes, 25K keys, 5888 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5888 writes, 867 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5888 writes, 25K keys, 5888 commit groups, 1.0 writes per commit group, ingest: 18.81 MB, 0.03 MB/s
                                           Interval WAL: 5888 writes, 867 syncs, 6.79 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:39.747537+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70639616 unmapped: 524288 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:40.747793+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70639616 unmapped: 524288 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:41.747934+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70639616 unmapped: 524288 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:42.748078+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70647808 unmapped: 516096 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:43.748265+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70647808 unmapped: 516096 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:44.748418+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70656000 unmapped: 507904 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:45.748577+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70656000 unmapped: 507904 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:46.748798+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70656000 unmapped: 507904 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:47.748945+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70664192 unmapped: 499712 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:48.749092+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70664192 unmapped: 499712 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:49.749233+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70664192 unmapped: 499712 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:50.749461+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70672384 unmapped: 491520 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:51.749644+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70672384 unmapped: 491520 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:52.749831+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70680576 unmapped: 483328 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:53.750092+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70680576 unmapped: 483328 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:54.750237+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70680576 unmapped: 483328 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:55.750398+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70688768 unmapped: 475136 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:56.750579+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70688768 unmapped: 475136 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:57.750718+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70680576 unmapped: 483328 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:58.750846+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70688768 unmapped: 475136 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:58:59.751012+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70688768 unmapped: 475136 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:00.751383+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70696960 unmapped: 466944 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:01.751541+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70696960 unmapped: 466944 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:02.751706+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70705152 unmapped: 458752 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:03.751894+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70705152 unmapped: 458752 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:04.752057+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70705152 unmapped: 458752 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:05.752185+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70705152 unmapped: 458752 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:06.752335+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70713344 unmapped: 450560 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:07.752542+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70713344 unmapped: 450560 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:08.752704+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70721536 unmapped: 442368 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:09.752877+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70721536 unmapped: 442368 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:10.753041+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70729728 unmapped: 434176 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:11.753243+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70729728 unmapped: 434176 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:12.753412+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70729728 unmapped: 434176 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:13.753606+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70737920 unmapped: 425984 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:14.753791+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70737920 unmapped: 425984 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:15.753976+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70729728 unmapped: 434176 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:16.754112+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.04
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 3872164391 kv_alloc: 1677721600 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1610612736 meta_used: 835568 data_alloc: 285212672 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 5502906777 mapped: 70746112 unmapped: 417792 heap: 71163904 old mem: 3872164391 new mem: 3872164391
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_config config(28 keys) v1
Jan 31 07:16:52 compute-1 ceph-osd[79145]: set_mon_vals no callback set
Jan 31 07:16:52 compute-1 ceph-osd[79145]: set_mon_vals failed to set rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator: Configuration option 'rgw_keystone_accepted_admin_roles' may not be modified at runtime
Jan 31 07:16:52 compute-1 ceph-osd[79145]: set_mon_vals failed to set rgw_keystone_accepted_roles = member, Member, admin: Configuration option 'rgw_keystone_accepted_roles' may not be modified at runtime
Jan 31 07:16:52 compute-1 ceph-osd[79145]: set_mon_vals failed to set rgw_keystone_admin_domain = default: Configuration option 'rgw_keystone_admin_domain' may not be modified at runtime
Jan 31 07:16:52 compute-1 ceph-osd[79145]: set_mon_vals failed to set rgw_keystone_admin_password = 12345678: Configuration option 'rgw_keystone_admin_password' may not be modified at runtime
Jan 31 07:16:52 compute-1 ceph-osd[79145]: set_mon_vals failed to set rgw_keystone_admin_project = service: Configuration option 'rgw_keystone_admin_project' may not be modified at runtime
Jan 31 07:16:52 compute-1 ceph-osd[79145]: set_mon_vals failed to set rgw_keystone_admin_user = swift: Configuration option 'rgw_keystone_admin_user' may not be modified at runtime
Jan 31 07:16:52 compute-1 ceph-osd[79145]: set_mon_vals failed to set rgw_keystone_implicit_tenants = true: Configuration option 'rgw_keystone_implicit_tenants' may not be modified at runtime
Jan 31 07:16:52 compute-1 ceph-osd[79145]: set_mon_vals failed to set rgw_keystone_url = https://keystone-internal.openstack.svc:5000: Configuration option 'rgw_keystone_url' may not be modified at runtime
Jan 31 07:16:52 compute-1 ceph-osd[79145]: operator() osd_memory_target cleared (was 5502906777)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _update_cache_settings updated pcm target: 4294967296 pcm min: 134217728 pcm max: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:17.754325+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 409600 heap: 71163904 old mem: 3872164391 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:18.754499+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 409600 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:19.754653+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 401408 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:20.754745+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 393216 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:21.754879+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 385024 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:22.754997+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 385024 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:23.755152+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 376832 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:24.755308+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 376832 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:25.755514+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 368640 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:26.755731+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 368640 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:27.755895+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 368640 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:28.756057+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 360448 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:29.756255+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 360448 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:30.756515+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 360448 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:31.756699+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 352256 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:33.149904+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 352256 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:34.150083+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 344064 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:35.150210+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 344064 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:36.150388+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 344064 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:37.150522+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 335872 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:38.150667+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 335872 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:39.150819+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 335872 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:40.151013+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 327680 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:41.151193+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 327680 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:42.151350+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 319488 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:43.151490+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 311296 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:44.151683+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 311296 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:45.151829+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 311296 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:46.151999+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 303104 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:47.152135+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 303104 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:48.152285+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 294912 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:49.152453+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 294912 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:50.152652+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 294912 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:51.152967+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 286720 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:52.153092+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 286720 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:53.153208+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 286720 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:54.153377+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 278528 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:55.153486+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 278528 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:56.153679+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 270336 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:57.153818+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 270336 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:58.153962+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 270336 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:59:59.154757+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 262144 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:00.154992+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 262144 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:01.155704+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 253952 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:02.155925+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 253952 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:03.156106+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 237568 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:04.156294+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 237568 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:05.156459+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 237568 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:06.156766+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 229376 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:07.156941+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 229376 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:08.157107+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 221184 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:09.157266+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 212992 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:10.157463+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 212992 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:11.157688+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 204800 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:12.157876+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 204800 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:13.158058+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 204800 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:14.158309+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 196608 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:15.158478+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 196608 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:16.158695+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 188416 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:17.159004+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 188416 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:18.159198+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 188416 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:19.159419+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 180224 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:20.159730+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 180224 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:21.159947+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 172032 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:22.160137+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 172032 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:23.160334+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 172032 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:24.160550+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 163840 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:25.160719+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 163840 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:26.160896+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 155648 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:27.161030+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 155648 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:28.161206+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 155648 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:29.161364+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 147456 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:30.161502+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 147456 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:31.161700+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 147456 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:32.161956+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 139264 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:33.162147+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 139264 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:34.162394+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 131072 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:35.162592+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 131072 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:36.162774+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 131072 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:37.162980+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 122880 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:38.163131+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 122880 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 336.210723877s of 336.216827393s, submitted: 2
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:39.163331+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 98304 heap: 71163904 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:40.163490+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1990656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:41.163733+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1581056 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:42.163900+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1572864 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:43.164109+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1556480 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:44.164295+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1556480 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:45.164497+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1548288 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:46.164686+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1548288 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:47.165119+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1540096 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:48.165346+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1540096 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:49.165498+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 1540096 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:50.165647+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1531904 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:51.165904+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1531904 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:52.166085+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1531904 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:53.166275+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1531904 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:54.166703+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1531904 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:55.166976+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1523712 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:56.167164+0000)
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.24887 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1523712 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:57.167380+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1523712 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:58.167564+0000)
Jan 31 07:16:52 compute-1 ceph-mon[81728]: pgmap v985: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1515520 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2314684662' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:00:59.167740+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1515520 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/246216183' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:00.167947+0000)
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2666598803' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1515520 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:01.168155+0000)
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/729368431' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1515520 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:02.168319+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.24905 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1459796621' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1507328 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.15114 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:03.168496+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1499136 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2995359576' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:04.168697+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1499136 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/860072631' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1847725198' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:05.169201+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1499136 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.24920 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:06.169341+0000)
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.15126 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1490944 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2259964109' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:07.169513+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/867841681' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1490944 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:08.169702+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1490944 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:09.169840+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1490944 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:10.170035+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1482752 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:11.170221+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1482752 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:12.170393+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1482752 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:13.170542+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1482752 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:14.170958+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1474560 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:15.171168+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1474560 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:16.171322+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1474560 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:17.173797+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1474560 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:18.174092+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1474560 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:19.174336+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1466368 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:20.174544+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1466368 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:21.174747+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1466368 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:22.174929+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1458176 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:23.175119+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:24.175305+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:25.175446+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:26.175581+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:27.175706+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:28.175871+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:29.176032+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:30.176168+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:31.176315+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:32.176444+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:33.176588+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:34.176789+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:35.176920+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:36.177090+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:37.177239+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:38.177395+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:39.177533+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:40.177665+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:41.177846+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:42.178026+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1433600 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:43.178173+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 1425408 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:44.178362+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 1425408 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:45.178487+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 1425408 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:46.178630+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 1425408 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:47.181631+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 1425408 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:48.181793+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:49.181936+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:50.182089+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:51.182237+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:52.182417+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:53.182587+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:54.182769+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:55.182923+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:56.183097+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:57.183253+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1417216 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:58.183373+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1409024 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:01:59.183494+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1409024 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:00.183778+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1409024 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:01.183948+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1409024 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:02.184106+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1409024 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:03.184248+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1392640 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:04.184420+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1392640 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:05.184543+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1392640 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:06.184700+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1392640 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:07.184924+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1392640 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:08.185127+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1392640 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:09.185264+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1392640 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:10.185436+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:11.185579+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:12.185762+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:13.185913+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:14.186127+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:15.186255+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:16.186944+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:17.187107+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:18.187256+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:19.187417+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:20.187558+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:21.187697+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:22.187928+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1384448 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:23.188054+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1368064 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:24.188209+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1368064 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:25.188388+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1368064 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:26.188587+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1368064 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:27.188727+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1368064 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:28.188904+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1368064 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:29.189051+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1368064 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:30.189232+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1359872 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:31.189370+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1359872 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:32.189515+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1359872 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:33.189636+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:34.189790+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:35.189953+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:36.190151+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:37.190302+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:38.190446+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:39.190584+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:40.190728+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:41.191088+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:42.191232+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1335296 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:43.191357+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:44.191546+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:45.191708+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:46.191904+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:47.192047+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:48.192206+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:49.192358+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:50.192498+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:51.192618+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:52.192794+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:53.192960+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:54.193192+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:55.193891+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:56.194041+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:57.194176+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1310720 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:58.194323+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1302528 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:02:59.194466+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1302528 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:00.194637+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1302528 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:01.194766+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 1302528 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:02.194924+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:03.195052+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:04.195197+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:05.195321+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:06.195450+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:07.195601+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:08.195740+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:09.195954+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:10.196122+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:11.196255+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:12.196428+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:13.196550+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1286144 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:14.196729+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1277952 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:15.196879+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1277952 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:16.197005+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1269760 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:17.197159+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1261568 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:18.197291+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1261568 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:19.197447+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1261568 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:20.197626+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1261568 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:21.197817+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1261568 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:22.198001+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:23.198146+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:24.198311+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:25.198463+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:26.198613+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:27.198742+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:28.198891+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:29.199049+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:30.199186+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:31.199333+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:32.199525+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:33.199656+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:34.199938+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:35.200087+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1245184 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:36.200281+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1236992 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:37.200425+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:38.200572+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1236992 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:39.200711+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1236992 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:40.200831+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1236992 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:41.200981+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1236992 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:42.201139+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1236992 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:43.201258+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:44.201523+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:45.201672+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:46.201807+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:47.201937+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:48.202048+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:49.202203+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:50.202409+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:51.202638+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:52.202844+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:53.203072+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:54.203340+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:55.203479+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:56.203642+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1220608 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:57.203844+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1212416 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:58.204037+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1212416 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:03:59.204174+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1212416 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:00.204307+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1212416 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:01.204496+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1212416 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:02.204688+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1212416 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:03.204918+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1187840 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:04.205127+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1187840 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:05.205250+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1179648 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:06.205400+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1179648 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:07.205541+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1179648 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:08.205737+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1179648 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 ms_handle_reset con 0x55c83ffdcc00 session 0x55c84003a3c0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_auth_request added challenge on 0x55c84041b400
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 ms_handle_reset con 0x55c83ffdd000 session 0x55c84003b2c0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_auth_request added challenge on 0x55c8416ad800
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:09.205951+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1155072 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:10.206113+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1155072 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:11.206310+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 1155072 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:12.206568+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1146880 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:13.206708+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1146880 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:14.206906+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1146880 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:15.207084+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1146880 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:16.207258+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1146880 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:17.207429+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1146880 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:18.207587+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1138688 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:19.207744+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1138688 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:20.208090+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1138688 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:21.208261+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1138688 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:22.208388+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1138688 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:23.208569+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:24.208834+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:25.209065+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:26.209343+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:27.209536+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:28.209731+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:29.209887+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:30.210042+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:31.210157+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:32.210308+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:33.210462+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:34.210632+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:35.210824+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:36.210994+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:37.211164+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:38.211304+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1122304 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:39.211434+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:40.211571+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:41.211727+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:42.211879+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:43.212005+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:44.212230+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:45.212388+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:46.212609+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:47.213002+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:48.213153+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:49.213305+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:50.213492+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:51.213655+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:52.213792+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:53.213947+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:54.214123+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:55.214273+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:56.214433+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:57.214649+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:58.214784+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:04:59.214927+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:00.215086+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:01.215238+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:02.215404+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1073152 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:03.215541+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1056768 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:04.215746+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:05.215906+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:06.216136+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:07.216367+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:08.216552+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:09.217062+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:10.217216+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:11.217428+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:12.217622+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:13.217792+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1114112 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:14.217952+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1105920 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:15.218074+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1105920 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:16.218219+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1105920 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:17.218398+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1105920 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:18.218630+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1105920 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:19.218779+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1105920 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:20.218933+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:21.219083+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:22.219213+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:23.219414+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:24.219605+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:25.219794+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:26.219950+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:27.220142+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:28.220296+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:29.220452+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:30.220634+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:31.220782+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:32.220930+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1097728 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:33.221106+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 ms_handle_reset con 0x55c8414acc00 session 0x55c83ff76780
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_auth_request added challenge on 0x55c83f50a000
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:34.221275+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:35.221421+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:36.221562+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:37.221747+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:38.221947+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:39.222071+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:40.222220+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:41.222467+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:42.222615+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:43.222739+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:44.223028+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:45.223164+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:46.223395+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:47.223554+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:48.223746+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:49.224260+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:50.224430+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:51.224936+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:52.225260+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1089536 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:53.225682+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:54.225925+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:55.226083+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:56.226442+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:57.226782+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:58.226946+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:05:59.227397+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:00.227649+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:01.227924+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:02.228214+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:03.228454+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:04.228724+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:05.228872+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:06.229074+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:07.229236+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1081344 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:08.229441+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:09.229622+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:10.229763+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:11.229975+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:12.230131+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:13.230302+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:14.230533+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:15.230727+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:16.230909+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:17.231052+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1064960 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:18.231190+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:19.231359+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:20.231516+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:21.231656+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:22.231800+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:23.231900+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:24.232047+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:25.232184+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:26.232337+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:27.232469+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1048576 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:28.232617+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:29.232826+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:30.233003+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:31.233213+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:32.233434+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:33.233627+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:34.233875+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:35.234072+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:36.234280+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:37.234458+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:38.234626+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:39.234770+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:40.234958+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:41.235136+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1032192 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:42.235302+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1024000 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:43.235422+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1024000 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:44.235604+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1024000 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:45.235762+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1024000 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:46.235915+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1024000 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:47.236072+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1024000 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:48.236213+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:49.236375+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:50.236514+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:51.236895+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:52.237344+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:53.237679+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:54.237966+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:55.238305+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:56.238626+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:57.238953+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:58.239171+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:06:59.239424+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:00.239684+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:01.239918+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:02.240138+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1007616 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:03.240278+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 999424 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:04.240567+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 999424 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:05.240828+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 991232 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:06.241125+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 991232 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:07.241352+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 991232 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:08.241599+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 974848 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:09.241951+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 974848 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:10.242209+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 974848 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:11.242452+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 974848 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:12.242671+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 974848 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:13.242956+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 974848 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:14.243216+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 974848 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:15.243411+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:16.243619+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:17.243798+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:18.244044+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:19.244244+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:20.244437+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:21.244662+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:22.245922+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:23.246205+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:24.246481+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:25.246660+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:26.246831+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:27.247048+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 966656 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:28.247317+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:29.247525+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:30.247643+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:31.247798+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:32.248031+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:33.248227+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:34.248431+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:35.248628+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:36.248795+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:37.249020+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:38.249159+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:39.249303+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:40.249500+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:41.249934+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:42.250128+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:43.250364+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:44.250636+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:45.250915+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:46.251050+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 950272 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:47.251180+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 925696 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:48.251355+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 925696 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:49.251511+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 925696 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:50.251706+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 925696 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:51.251953+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 925696 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:52.252111+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 925696 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:53.252291+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 925696 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:54.252456+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:55.252572+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:56.252708+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:57.252813+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:58.252964+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:07:59.253108+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:00.253245+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:01.253387+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:02.253474+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:03.253567+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:04.253741+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:05.253917+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:06.254118+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 917504 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:07.254294+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 901120 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:08.254471+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 892928 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:09.254637+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 892928 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:10.254775+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 892928 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:11.254905+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 892928 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:12.255098+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 892928 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:13.255236+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 892928 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:14.255413+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 892928 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:15.255589+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 892928 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:16.255771+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 892928 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:17.255902+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 884736 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:18.256038+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:19.256151+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 884736 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:20.256299+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 884736 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:21.256434+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:22.256549+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:23.256717+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:24.256950+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:25.257528+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:26.257824+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:27.258092+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:28.258262+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:29.258765+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:30.258975+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:31.259381+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:32.259705+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:33.259933+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:34.260235+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:35.260396+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:36.260594+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:37.260758+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:38.260913+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:39.261155+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6383 writes, 26K keys, 6383 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6383 writes, 1107 syncs, 5.77 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 495 writes, 762 keys, 495 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 495 writes, 240 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c83dd6b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:40.261415+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:41.261568+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:42.261726+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:43.261996+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:44.262255+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:45.262472+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:46.262687+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:47.262819+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:48.262915+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:49.263068+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:50.263218+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:51.263356+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:52.263542+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:53.263700+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:54.263964+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 876544 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:55.264164+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 868352 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:56.264333+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 868352 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:57.264449+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 868352 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:58.264564+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 868352 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:08:59.264711+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 868352 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:00.264920+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 868352 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:01.265068+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 868352 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:02.265212+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 868352 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:03.265350+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:04.265633+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:05.265791+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:06.265926+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:07.266076+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:08.266199+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:09.266389+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:10.266579+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:11.266725+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:12.266950+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:13.267127+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:14.267362+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:15.267532+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:16.267690+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:17.267913+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:18.268167+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:19.268335+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:20.268500+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:21.268670+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 860160 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:22.268847+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:23.269006+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:24.269176+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:25.269325+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:26.269475+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:27.269686+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:28.269922+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:29.270115+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:30.270390+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:31.270557+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:32.270745+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:33.270949+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:34.271135+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:35.271318+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:36.271499+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:37.271736+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:38.271950+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:39.272135+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:40.272298+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:41.272447+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:42.272605+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:43.272798+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:44.273078+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:45.273232+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:46.273455+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:47.273632+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 851968 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:48.273807+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:49.274017+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:50.274230+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:51.274421+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:52.274579+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:53.274750+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:54.274963+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:55.275104+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:56.275340+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:57.275498+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:58.275656+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 835584 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:09:59.275906+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 827392 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:00.276103+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 827392 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:01.276423+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 827392 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:02.277040+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 827392 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:03.277578+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 827392 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:04.277884+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 827392 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:05.278106+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 827392 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:06.278475+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 827392 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:07.278648+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 827392 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:08.279072+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:09.279258+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:10.279531+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:11.279655+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:12.279817+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:13.280032+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:14.280240+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:15.280380+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:16.280515+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:17.280674+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:18.280890+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:19.281054+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:20.281279+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:21.281473+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:22.281681+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:23.281908+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:24.282205+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:25.282440+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:26.282621+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:27.282775+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 811008 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:28.283004+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 794624 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:29.283199+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 794624 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:30.283342+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 794624 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:31.283546+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 794624 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:32.283751+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 794624 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:33.284074+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 794624 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:34.284282+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 794624 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:35.284435+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 786432 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:36.284589+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 786432 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:37.284822+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 786432 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:38.285028+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 786432 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:39.285188+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.392883301s of 600.400268555s, submitted: 255
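
The _kv_sync_thread utilization line above is the clearest idle indicator in this stretch: over a roughly 600-second window the commit thread was busy for about one second in total. The arithmetic, as a plain Python sketch:

    idle, window, submitted = 599.392883301, 600.400268555, 255
    print(f"busy: {(window - idle) / window:.2%}")        # ~0.17%
    print(f"rate: {submitted / window:.2f} submits/s")    # ~0.42
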
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 786432 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:40.285322+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 737280 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:41.285531+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 720896 heap: 75358208 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:42.285680+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 1671168 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:43.285907+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1613824 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:44.286182+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1613824 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:45.286382+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1605632 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:46.286623+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1605632 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:47.286933+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1605632 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:48.287124+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1605632 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:49.287316+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1605632 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:50.287535+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1605632 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:51.287729+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1605632 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:52.287934+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1605632 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:53.288098+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1597440 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:54.288323+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835568 data_alloc: 218103808 data_used: 155648
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1597440 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:55.288481+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: handle_auth_request added challenge on 0x55c841f45800
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 1581056 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 heartbeat osd_stat(store_statfs(0x1bca51000/0x0/0x1bfc00000, data 0x1275a0/0x1cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:56.288682+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 1581056 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _renew_subs
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.752384186s of 17.515455246s, submitted: 229
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:57.288848+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _renew_subs
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
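
Here the OSD catches up one map epoch per handle_osd_map message (118 to 119 above, 119 to 120 here, and 120 to 121 further below). A hypothetical parser for these lines, written against the exact log text rather than any Ceph API:

    import re

    # Pull the carried epoch range, the OSD's current epoch, and the
    # sender's full range out of a handle_osd_map log line.
    pat = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], "
                     r"i have (\d+), src has \[(\d+),(\d+)\]")
    line = "osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]"
    first, last, have, lo, hi = map(int, pat.search(line).groups())
    print(f"message carries maps {first}..{last}; OSD is {last - have} behind; "
          f"sender holds {lo}..{hi}")
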
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 499712 heap: 76406784 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:58.289070+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 17113088 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:10:59.289236+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905190 data_alloc: 218103808 data_used: 163840
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76079104 unmapped: 17113088 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:00.289403+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 120 heartbeat osd_stat(store_statfs(0x1bc247000/0x0/0x1bfc00000, data 0x92b11d/0x9d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 17080320 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:01.289592+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 17072128 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:02.289787+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:03.290018+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:04.290264+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:05.290417+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:06.290934+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:07.291453+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:08.291762+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:09.292544+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:10.293489+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:11.294143+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:12.294323+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:13.294538+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:14.294953+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:15.295187+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:16.295362+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:17.295607+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:18.295826+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:19.296048+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:20.296229+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:21.296414+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:22.296612+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:23.296757+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:24.296947+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:25.297105+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:26.297251+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:27.297438+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:28.297761+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:29.297922+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:30.298083+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:31.298310+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:32.298510+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:33.298672+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:34.298909+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:35.299084+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:36.299419+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:37.299681+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:38.299972+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:39.300260+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:40.300443+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:41.300647+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:42.300895+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:43.301047+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:44.301311+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:45.301530+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:46.301720+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:47.301896+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:48.302030+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:49.302226+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:50.302344+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:51.302536+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:52.302732+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:53.302947+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:54.304225+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:55.304407+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:56.304545+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:57.304722+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:58.304909+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:11:59.305078+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:00.305272+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:01.305478+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:02.305656+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:03.305826+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:04.306031+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:05.306188+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:06.306345+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:07.306493+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:08.306659+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:09.306825+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:10.306986+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:11.307181+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:12.307355+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:13.307507+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:14.307755+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:15.307946+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:16.308138+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:17.308289+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:18.308418+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:19.308583+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:20.308722+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:21.308929+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:22.309065+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:23.309191+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:24.309353+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:25.309498+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:26.309644+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:27.309803+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:28.309957+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:29.310158+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:30.310328+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:31.310533+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:32.310671+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:33.310906+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:34.311168+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:35.311345+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:36.311526+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:37.311691+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:38.311844+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:39.312065+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:40.312238+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:41.312452+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:42.312615+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:43.312771+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:44.312943+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:45.313094+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:46.313260+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:47.313480+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:48.313641+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:49.313842+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:50.314024+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:51.314238+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:52.314418+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:53.314587+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:54.314777+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:55.314924+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:56.315079+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:57.315274+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:58.315423+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:12:59.315661+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:00.315916+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:01.316112+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:02.316270+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:03.316438+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:04.316607+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:05.316800+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:06.317021+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:07.317216+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:08.317440+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:09.317683+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:10.317934+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:11.318136+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:12.318326+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:13.318532+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:14.318737+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:15.318955+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:16.319157+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:17.319316+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:18.320126+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:19.320329+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:20.320486+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:21.320644+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:22.320789+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:23.320985+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 17047552 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:24.321287+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:25.321423+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:26.321580+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:27.321785+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:28.321931+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:29.322142+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:30.322301+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:31.322465+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:32.322609+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:33.322810+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:34.323037+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76152832 unmapped: 17039360 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:35.323198+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:36.323378+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:37.323548+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:38.323925+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:39.324198+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:40.324389+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:41.324555+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:42.324701+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:43.324846+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:44.325050+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:45.325234+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:46.325442+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:47.325617+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:48.325757+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:49.325951+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:50.326112+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:51.326341+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:52.326479+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:53.326603+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:54.326795+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:55.326975+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:56.327173+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:57.327318+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:58.327471+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:13:59.327631+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:00.327780+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:01.327928+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:02.328089+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:03.328234+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:04.328461+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:05.328617+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:06.328786+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:07.328986+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:08.329271+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:09.329496+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:10.329743+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:11.329971+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:12.330189+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:13.330360+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:14.330540+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:15.330746+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:16.330951+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:17.331107+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:18.331329+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:19.331560+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:20.331771+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:21.331963+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:22.332111+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:23.332239+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:24.332495+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:25.332656+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:26.332910+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:27.333067+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:28.333260+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:29.333430+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:30.333591+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:31.333755+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:32.333997+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:33.334221+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:34.334465+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:35.357101+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:36.357317+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:37.359734+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:38.360014+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:39.360320+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:40.360486+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:41.360750+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:42.361160+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:43.361329+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:44.361560+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:45.361747+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:46.361967+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:47.362137+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:48.362308+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:49.362479+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:50.362651+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:51.362920+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:52.363155+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:53.363384+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:54.363668+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:55.363959+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:56.364378+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:57.364618+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 17031168 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:58.364840+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:14:59.365052+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:00.365296+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:01.365556+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:02.365694+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:03.365896+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:04.366325+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:05.366548+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:06.366686+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:07.366855+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:08.367031+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:09.367548+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:10.367750+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:11.367959+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:12.368208+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:13.368388+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:14.368730+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:15.368999+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:16.369202+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:17.369377+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:18.369588+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:19.369784+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:20.370107+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76169216 unmapped: 17022976 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:21.370258+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:22.370450+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:23.370672+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:24.370927+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:25.371192+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:26.371462+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:27.371701+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:28.371978+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:29.372148+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:30.374354+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:31.374546+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:32.374748+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:33.375012+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:34.375256+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:35.375410+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:36.375558+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:37.375776+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:38.375952+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:39.376132+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 17014784 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:40.376357+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:41.376502+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:42.376927+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:43.377097+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:44.377286+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:45.377455+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:46.377693+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:47.377890+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:48.378044+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:49.378212+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:50.378345+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:51.378497+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:52.378689+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:53.378894+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:54.379114+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:55.379384+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:56.379541+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:57.379717+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:58.379962+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:15:59.380213+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:00.380445+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:01.380629+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:02.380764+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:03.381239+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:04.381443+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:05.381600+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:06.381742+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:07.381934+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:08.382126+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:09.382277+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:10.382620+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:11.382785+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:12.383056+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:13.383235+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:14.383437+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:15.383816+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:16.384045+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:17.384220+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: osd.1 121 heartbeat osd_stat(store_statfs(0x1bc245000/0x0/0x1bfc00000, data 0x92cd7e/0x9d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:18.384355+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 17006592 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:19.384507+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: do_command 'config diff' '{prefix=config diff}'
Jan 31 07:16:52 compute-1 ceph-osd[79145]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 16809984 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: do_command 'config show' '{prefix=config show}'
Jan 31 07:16:52 compute-1 ceph-osd[79145]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 07:16:52 compute-1 ceph-osd[79145]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 07:16:52 compute-1 ceph-osd[79145]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 07:16:52 compute-1 ceph-osd[79145]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 07:16:52 compute-1 ceph-osd[79145]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:20.384658+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 16302080 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 07:16:52 compute-1 ceph-osd[79145]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 07:16:52 compute-1 ceph-osd[79145]: bluestore.MempoolThread(0x55c83de49b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907204 data_alloc: 218103808 data_used: 167936
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: tick
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_tickets
Jan 31 07:16:52 compute-1 ceph-osd[79145]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T07:16:21.384845+0000)
Jan 31 07:16:52 compute-1 ceph-osd[79145]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 16228352 heap: 93192192 old mem: 2845415833 new mem: 2845415833
Jan 31 07:16:52 compute-1 ceph-osd[79145]: do_command 'log dump' '{prefix=log dump}'
Jan 31 07:16:52 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:52 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:52 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:52.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:52 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 07:16:52 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3247444512' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 07:16:53 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3576274316' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 07:16:53 compute-1 crontab[226799]: (root) LIST (root)
Jan 31 07:16:53 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:53 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:53 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:53.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.24940 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3054835135' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.24941 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.15135 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2775754417' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3820177980' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/4135518504' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.24950 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.15156 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3842944403' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3247444512' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/811136538' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/821763652' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 07:16:53 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 07:16:53 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2192889195' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 07:16:54 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:54 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:16:54 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:54.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:16:54 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 07:16:54 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3427705111' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.24962 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: pgmap v986: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.15171 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.24988 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3576274316' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.24977 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1482759960' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/116236357' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.25003 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.24995 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.15192 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2192889195' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1366413530' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3396256351' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2646627464' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/366351136' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 07:16:55 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1031423109' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 07:16:55 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3064281076' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 07:16:55 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3047624025' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 07:16:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 07:16:55 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1064318126' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 07:16:55 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:55 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:55 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:55.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:55 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 07:16:55 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/944213257' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 07:16:56 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/597311083' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 07:16:56 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/243168414' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:56 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:56 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:56.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.25018 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.25013 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.15210 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.25033 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.15222 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.25045 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3427705111' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1061913832' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.25046 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: pgmap v987: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.15240 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.25057 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2152750469' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1031423109' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3064281076' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1140399047' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.25072 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3047624025' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1064318126' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.15273 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3700791227' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/944213257' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/597311083' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/243168414' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 systemd[1]: Starting Hostname Service...
Jan 31 07:16:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 07:16:56 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2444736972' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 07:16:56 compute-1 systemd[1]: Started Hostname Service.
Jan 31 07:16:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 07:16:56 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3150107905' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 07:16:56 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 07:16:56 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3046455468' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 31 07:16:57 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2968818811' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 07:16:57 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4100124836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 07:16:57 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2358365230' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 07:16:57 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:57 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:57 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:57.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.25087 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/625497177' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/772204015' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2444736972' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.25102 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3150107905' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/850840323' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1040105218' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/440618280' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3046455468' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2968818811' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3807619893' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2567898811' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 07:16:57 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 07:16:57 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3008686762' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 07:16:58 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1901276115' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 07:16:58 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2189737042' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 07:16:58 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:58 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:58 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:58.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:16:58 compute-1 ceph-mon[81728]: pgmap v988: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:58 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/4100124836' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3852268972' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3551956014' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2358365230' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/1213005337' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3008686762' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1559 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2608529219' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2232109545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/232217152' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/1901276115' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/21536316' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2189737042' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/140695390' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/918116384' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2400978574' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 07:16:58 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/4133065278' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 07:16:59 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:16:59 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:16:59 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:59.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:17:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 07:17:00 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3628672810' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 07:17:00 compute-1 ceph-mon[81728]: from='client.25135 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:00 compute-1 ceph-mon[81728]: from='client.25187 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:00 compute-1 ceph-mon[81728]: from='client.25196 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:17:00 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2577577129' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 07:17:00 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1081653028' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 07:17:00 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2347524087' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 07:17:00 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1701911331' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 07:17:00 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:17:00 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1668043378' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 07:17:00 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3036324244' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 07:17:00 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:17:00 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:00 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:00.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:00 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 07:17:00 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3727202116' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 07:17:01 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2311380550' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.25202 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: pgmap v989: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.25211 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.15405 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.15393 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.25223 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.15414 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.15420 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.25238 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/376874862' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1927590070' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3628672810' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.25250 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.15429 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3736854517' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/125797110' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.15438 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.25268 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/3727202116' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/773264957' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2675017524' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.25243 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2311380550' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/854097873' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/1219829903' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 07:17:01 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2171619823' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 07:17:01 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:17:01 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:01 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:02 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:17:02 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:17:02 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:02.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:17:02 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.15450 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: pgmap v990: 321 pgs: 1 active+clean+laggy, 320 active+clean; 8.4 MiB data, 160 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.25277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.15465 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.25258 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/2171619823' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/2667654753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.10:0/2667654753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/327818538' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.25267 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.15483 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/2839043288' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/3121872008' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 07:17:02 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 07:17:02 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 31 07:17:02 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4145722670' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 07:17:03 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:17:03 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:03 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:03 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 31 07:17:03 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4156100970' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='client.25288 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='client.25318 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 07:17:03 compute-1 ceph-mon[81728]: Health check update: 2 slow ops, oldest one blocked for 1564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2676700891' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.101:0/4145722670' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 07:17:03 compute-1 ceph-mon[81728]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'images' : 2 ])
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/4179986046' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.100:0/3404632863' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 07:17:03 compute-1 ceph-mon[81728]: from='client.? 192.168.122.102:0/2518986032' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 07:17:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 31 07:17:04 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3712297915' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 07:17:04 compute-1 radosgw[83730]: ====== starting new request req=0x7f49eaa756f0 =====
Jan 31 07:17:04 compute-1 radosgw[83730]: ====== req done req=0x7f49eaa756f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:17:04 compute-1 radosgw[83730]: beast: 0x7f49eaa756f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:04.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:17:04 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 07:17:04 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 07:17:04 compute-1 ceph-mon[81728]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 31 07:17:04 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1056368325' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 07:17:04 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 07:17:04 compute-1 ceph-mon[81728]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
